mirror of https://github.com/hwchase17/langchain.git
synced 2026-02-08 10:09:46 +00:00
Compare commits
150 Commits
v0.0.116...harrison/a
| Author | SHA1 | Date |
|---|---|---|
| | 5df689ec3c | |
| | 9246c5d226 | |
| | d4315f2eb0 | |
| | 82e9b211dc | |
| | 477857bc39 | |
| | ae7aa0eaa4 | |
| | 8c66688454 | |
| | 66a84307f1 | |
| | cb752c66de | |
| | 0cbe33a06b | |
| | f5a4bf0ce4 | |
| | a0453ebcf5 | |
| | ffb7de34ca | |
| | 09085c32e3 | |
| | a8daffdb9e | |
| | 8b91a21e37 | |
| | 55b52bad21 | |
| | b35260ed47 | |
| | 7bea3b302c | |
| | ba4e090c6c | |
| | e75247e6a6 | |
| | 22aa27dfb6 | |
| | 961b05a8c7 | |
| | 4ae228a3d0 | |
| | b5449a866d | |
| | 8441cbfc03 | |
| | 4ab66c4f52 | |
| | 27f80784d0 | |
| | 031e32f331 | |
| | ccee1aedd2 | |
| | e2c26909f2 | |
| | 3e879b47c1 | |
| | 859502b16c | |
| | c33e055f17 | |
| | a5bf8c9b9d | |
| | 0874872dee | |
| | ef25904ecb | |
| | 9d6f649ba5 | |
| | c58932e8fd | |
| | 6e85cbcce3 | |
| | b25dbcb5b3 | |
| | a554e94a1a | |
| | 5f34dffedc | |
| | aff33d52c5 | |
| | f16c1fb6df | |
| | a9e1043673 | |
| | f281033362 | |
| | 410bf37fb8 | |
| | eff5eed719 | |
| | d0a56f47ee | |
| | 9e74df2404 | |
| | 0bee219cb3 | |
| | 923a7dde5a | |
| | 4cd5cf2e95 | |
| | 33ebb05251 | |
| | e0331b55bb | |
| | d5825bd3e8 | |
| | e8d9cbca3f | |
| | b5020c7d9c | |
| | 5bea731fb4 | |
| | 0e3b0c827e | |
| | 365669a7fd | |
| | b7f392fdd6 | |
| | 4be2f9d75a | |
| | f74a1bebf5 | |
| | 76ecca4d53 | |
| | b7ebb8fe30 | |
| | 41c8a42e22 | |
| | 1cc9e90041 | |
| | 30e3b31b04 | |
| | a0cd6672aa | |
| | 8b5a43d720 | |
| | 725b668aef | |
| | 024efb09f8 | |
| | 953e58d004 | |
| | f257b08406 | |
| | 5e91928607 | |
| | 880a6a3db5 | |
| | 71e8eaff2b | |
| | 6598beacdb | |
| | e4f15e4eac | |
| | e50c1ea7fb | |
| | 62e08f80de | |
| | c50fafb35d | |
| | 3d3e523520 | |
| | c1a9d83b34 | |
| | 42d725223e | |
| | 0bbcc7815b | |
| | b26fa1935d | |
| | bc2ed93b77 | |
| | c71f2a7b26 | |
| | 51681f653f | |
| | 705431aecc | |
| | b83e826510 | |
| | e7d6de6b1c | |
| | 6e0d3880df | |
| | 6ec5780547 | |
| | 47d37db2d2 | |
| | 4f364db9a9 | |
| | 030ce9f506 | |
| | 8990122d5d | |
| | 52d6bf04d0 | |
| | 910da8518f | |
| | 2f27ef92fe | |
| | 75149d6d38 | |
| | fab7994b74 | |
| | eb80d6e0e4 | |
| | b5667bed9e | |
| | b3be83c750 | |
| | 50626a10ee | |
| | 6e1b5b8f7e | |
| | eec9b1b306 | |
| | ea142f6a32 | |
| | 12f868b292 | |
| | 31f9ecfc19 | |
| | 273e9bf296 | |
| | f155d9d3ec | |
| | d3d4503ce2 | |
| | 1f93c5cf69 | |
| | 15b5a08f4b | |
| | ff4a25b841 | |
| | 2212520a6c | |
| | d08f940336 | |
| | 2280a2cb2f | |
| | ce5d97bcb3 | |
| | 8fa1764c60 | |
| | f299bd1416 | |
| | 064be93edf | |
| | 86822d1cc2 | |
| | a581bce379 | |
| | 2ffc643086 | |
| | 2136dc94bb | |
| | a92344f476 | |
| | b706966ebc | |
| | 1c22657256 | |
| | 6f02286805 | |
| | 3674074eb0 | |
| | a7e09d46c5 | |
| | fa2e546b76 | |
| | c592b12043 | |
| | 9555bbd5bb | |
| | 0ca1641b14 | |
| | d5b4393bb2 | |
| | 7b6ff7fe00 | |
| | 76c7b1f677 | |
| | 5aa8ece211 | |
| | f6d24d5740 | |
| | b1c4480d7c | |
| | b6ba989f2f | |
| | 04acda55ec | |
2 .github/CONTRIBUTING.md vendored
@@ -73,6 +73,8 @@ poetry install -E all
This will install all requirements for running the package, examples, linting, formatting, tests, and coverage. Note the `-E all` flag will install all optional dependencies necessary for integration testing.

+❗Note: If you're running Poetry 1.4.1 and receive a `WheelFileValidationError` for `debugpy` during installation, you can try either downgrading to Poetry 1.4.0 or disabling "modern installation" (`poetry config installer.modern-installation false`) and re-installing the requirements. See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
+
Now, you should be able to run the common tasks in the following section.

## ✅Common Tasks
5 .gitignore vendored
@@ -136,5 +136,8 @@ dmypy.json
# macOS display setting files
.DS_Store

+# Wandb directory
+wandb/
+
# asdf tool versions
.tool-versions
@@ -32,7 +32,7 @@ This library is aimed at assisting in the development of those types of applications.

**🤖 Agents**

-- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/agents.html)
+- [Documentation](https://langchain.readthedocs.io/en/latest/modules/agents.html)
- End-to-end Example: [GPT+WolframAlpha](https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain)

## 📖 Documentation
@@ -37,3 +37,6 @@ A minimal example on how to run LangChain on Vercel using Flask.

## [SteamShip](https://github.com/steamship-core/steamship-langchain/)
+This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship.
+This includes: production-ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc.
+
## [Langchain-serve](https://github.com/jina-ai/langchain-serve)
This repository allows users to serve local chains and agents as RESTful, gRPC, or Websocket APIs thanks to [Jina](https://docs.jina.ai/). Deploy your chains & agents with ease and enjoy independent scaling, serverless and autoscaling APIs, as well as a Streamlit playground on Jina AI Cloud.
@@ -24,4 +24,4 @@ To import this vectorstore:
from langchain.vectorstores import AtlasDB
```

-For a more detailed walkthrough of the AtlasDB wrapper, see [this notebook](../modules/indexes/vectorstore_examples/atlas.ipynb)
+For a more detailed walkthrough of the AtlasDB wrapper, see [this notebook](../modules/indexes/vectorstores/examples/atlas.ipynb)
@@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Chroma
```

-For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)
+For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/indexes/vectorstores/getting_started.ipynb)
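As a quick illustration of the vectorstore interface, here is a minimal sketch (it assumes `chromadb` is installed and an OpenAI API key is configured for the embeddings):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Embed a few texts into an in-memory Chroma collection and query it.
docsearch = Chroma.from_texts(
    ["harrison worked at kensho", "langchain supports many vectorstores"],
    OpenAIEmbeddings(),
)
docs = docsearch.similarity_search("Where did harrison work?", k=1)
```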
@@ -22,4 +22,4 @@ There exists a Cohere Embeddings wrapper, which you can access with
```python
from langchain.embeddings import CohereEmbeddings
```
-For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/cohere.ipynb)
@@ -22,4 +22,4 @@ from langchain.vectorstores import DeepLake
```

-For a more detailed walkthrough of the Deep Lake wrapper, see [this notebook](../modules/indexes/vectorstore_examples/deeplake.ipynb)
+For a more detailed walkthrough of the Deep Lake wrapper, see [this notebook](../modules/indexes/vectorstores/examples/deeplake.ipynb)
@@ -18,7 +18,7 @@ There exists a GoogleSearchAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities import GoogleSearchAPIWrapper
```

-For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_search.ipynb).
+For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/google_search.ipynb).

### Tool

@@ -29,4 +29,4 @@ from langchain.agents import load_tools
tools = load_tools(["google-search"])
```

-For more information on this, see [this page](../modules/agents/tools.md)
+For more information on this, see [this page](../modules/agents/tools/getting_started.md)
@@ -58,7 +58,7 @@ So the final answer is: El Palmar, Spain
'El Palmar, Spain'
```

-For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_serper.ipynb).
+For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/google_serper.ipynb).

### Tool

@@ -69,4 +69,4 @@ from langchain.agents import load_tools
tools = load_tools(["google-serper"])
```

-For more information on this, see [this page](../modules/agents/tools.md)
+For more information on this, see [this page](../modules/agents/tools/getting_started.md)
@@ -30,7 +30,7 @@ To use the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.llms import HuggingFaceHub
```
-For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](../modules/llms/integrations/huggingface_hub.ipynb)
+For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](../modules/models/llms/integrations/huggingface_hub.ipynb)

### Embeddings

@@ -47,7 +47,7 @@ To use the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.embeddings import HuggingFaceHubEmbeddings
```
-For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/huggingfacehub.ipynb)

### Tokenizer

@@ -59,7 +59,7 @@ You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_huggingface_tokenizer(...)
```
-For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/indexes/text_splitters/examples/huggingface_length_function.ipynb)
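For illustration, a minimal sketch of token counting with a Hugging Face tokenizer during splitting (the sample text and chunk sizes are placeholders):

```python
from transformers import GPT2TokenizerFast
from langchain.text_splitter import CharacterTextSplitter

# Chunk lengths are measured with the GPT-2 tokenizer instead of raw characters.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer, chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text("Some long document text to split into chunks...")
```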
### Datasets
18 docs/ecosystem/jina.md Normal file
@@ -0,0 +1,18 @@
# Jina

This page covers how to use the Jina ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Jina wrappers.

## Installation and Setup
- Install the Python SDK with `pip install jina`
- Get a Jina AI Cloud auth token from [here](https://cloud.jina.ai/settings/tokens) and set it as an environment variable (`JINA_AUTH_TOKEN`)

## Wrappers

### Embeddings

There exists a Jina Embeddings wrapper, which you can access with
```python
from langchain.embeddings import JinaEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
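As a minimal usage sketch (it assumes `JINA_AUTH_TOKEN` is set as described above and that the wrapper's default model is acceptable; `embed_query` and `embed_documents` are the standard LangChain embeddings interface):

```python
from langchain.embeddings import JinaEmbeddings

embeddings = JinaEmbeddings()  # assumes JINA_AUTH_TOKEN is set in the environment
query_vector = embeddings.embed_query("What is LangChain?")
doc_vectors = embeddings.embed_documents(["LangChain is a framework for LLM apps."])
```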
20 docs/ecosystem/milvus.md Normal file
@@ -0,0 +1,20 @@
# Milvus

This page covers how to use the Milvus ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Milvus wrappers.

## Installation and Setup
- Install the Python SDK with `pip install pymilvus`

## Wrappers

### VectorStore

There exists a wrapper around Milvus indexes, allowing you to use it as a vectorstore, whether for semantic search or example selection.

To import this vectorstore:
```python
from langchain.vectorstores import Milvus
```

For a more detailed walkthrough of the Milvus wrapper, see [this notebook](../modules/indexes/vectorstores/examples/milvus.ipynb)
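As a minimal usage sketch (the `connection_args` below assume a Milvus instance on the default local host and port, and an OpenAI key for the embeddings; see the linked notebook for exact connection options):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Milvus

# Index a few texts in a running Milvus instance, then query it.
vector_db = Milvus.from_texts(
    ["LangChain integrates with Milvus."],
    OpenAIEmbeddings(),
    connection_args={"host": "127.0.0.1", "port": "19530"},  # assumed local defaults
)
docs = vector_db.similarity_search("What integrates with Milvus?")
```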
@@ -21,7 +21,7 @@ If you are using a model hosted on Azure, you should use a different wrapper for that:
```python
from langchain.llms import AzureOpenAI
```
-For a more detailed walkthrough of the Azure wrapper, see [this notebook](../modules/llms/integrations/azure_openai_example.ipynb)
+For a more detailed walkthrough of the Azure wrapper, see [this notebook](../modules/models/llms/integrations/azure_openai_example.ipynb)

@@ -31,7 +31,7 @@ There exists an OpenAI Embeddings wrapper, which you can access with
```python
from langchain.embeddings import OpenAIEmbeddings
```
-For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/openai.ipynb)

### Tokenizer

@@ -44,7 +44,7 @@ You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
-For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/indexes/text_splitters/examples/tiktoken.ipynb)
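For illustration, a minimal sketch of token-based splitting (the chunk sizes and sample text are placeholders):

```python
from langchain.text_splitter import CharacterTextSplitter

# Chunk lengths are measured in tiktoken tokens rather than characters.
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=100, chunk_overlap=0
)
texts = text_splitter.split_text("Some long document text to split into chunks...")
```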
### Moderation
You can also access the OpenAI content moderation endpoint with
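A minimal sketch of what that looks like (the `OpenAIModerationChain` import path is an assumption for this version of the library):

```python
from langchain.chains import OpenAIModerationChain  # assumed import path

moderation_chain = OpenAIModerationChain()
moderation_chain.run("This is okay text")
```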
@@ -18,4 +18,4 @@ To import this vectorstore:
from langchain.vectorstores import OpenSearchVectorSearch
```

-For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](../modules/indexes/vectorstore_examples/opensearch.ipynb)
+For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](../modules/indexes/vectorstores/examples/opensearch.ipynb)
@@ -26,4 +26,4 @@ from langchain.vectorstores.pgvector import PGVector

### Usage

-For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](../modules/indexes/vectorstore_examples/pgvector.ipynb)
+For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](../modules/indexes/vectorstores/examples/pgvector.ipynb)
@@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Pinecone
```

-For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/indexes/vectorstore_examples/pinecone.ipynb)
+For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/indexes/vectorstores/examples/pinecone.ipynb)
@@ -40,10 +40,10 @@ for res in llm_results.generations:
```
You can use the PromptLayer request ID to add a prompt, score, or other metadata to your request. [Read more about it here](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).

-This LLM is identical to the [OpenAI LLM](./openai), except that
+This LLM is identical to the [OpenAI LLM](./openai.md), except that
- all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantiating to tag your requests on PromptLayer
- you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).

-PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](../modules/chat/examples/promptlayer_chat_openai.ipynb) and `PromptLayerOpenAIChat`
+PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](../modules/models/chat/integrations/promptlayer_chatopenai.ipynb) and `PromptLayerOpenAIChat`
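A short sketch of those two options together (the `generation_info` key shown is an assumption based on the wrapper's documented behavior):

```python
from langchain.llms import PromptLayerOpenAI

# Tag requests on PromptLayer and get back a request id for later scoring/metadata.
llm = PromptLayerOpenAI(pl_tags=["langchain"], return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])
for res in llm_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]  # assumed key
```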
20 docs/ecosystem/qdrant.md Normal file
@@ -0,0 +1,20 @@
# Qdrant

This page covers how to use the Qdrant ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Qdrant wrappers.

## Installation and Setup
- Install the Python SDK with `pip install qdrant-client`

## Wrappers

### VectorStore

There exists a wrapper around Qdrant indexes, allowing you to use it as a vectorstore, whether for semantic search or example selection.

To import this vectorstore:
```python
from langchain.vectorstores import Qdrant
```

For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](../modules/indexes/vectorstores/examples/qdrant.ipynb)
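As a minimal usage sketch (it assumes a Qdrant instance reachable on localhost and an OpenAI key for the embeddings; connection parameters are forwarded to the underlying `qdrant-client`):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

# Index a few texts in a local Qdrant instance, then query it.
qdrant = Qdrant.from_texts(
    ["LangChain integrates with Qdrant."],
    OpenAIEmbeddings(),
    host="localhost",  # assumed local instance
)
docs = qdrant.similarity_search("What integrates with Qdrant?")
```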
47 docs/ecosystem/replicate.md Normal file
@@ -0,0 +1,47 @@
# Replicate
This page covers how to run models on Replicate within LangChain.

## Installation and Setup
- Create a [Replicate](https://replicate.com) account. Get your API key and set it as an environment variable (`REPLICATE_API_TOKEN`)
- Install the [Replicate python client](https://github.com/replicate/replicate-python) with `pip install replicate`

## Calling a model

Find a model on the [Replicate explore page](https://replicate.com/explore), and then paste in the model name and version in this format: `owner-name/model-name:version`

For example, for this [flan-t5 model](https://replicate.com/daanelson/flan-t5), click on the API tab. The model name/version would be: `daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8`

Only the `model` param is required, but any other model parameters can also be passed in with the format `input={model_param: value, ...}`

For example, if we were running stable diffusion and wanted to change the image dimensions:

```python
Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
```

*Note that only the first output of a model will be returned.*

From here, we can initialize our model:

```python
llm = Replicate(model="daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8")
```

And run it:

```python
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
```

We can call any Replicate model (not just LLMs) using this syntax. For example, we can call [Stable Diffusion](https://replicate.com/stability-ai/stable-diffusion):

```python
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
                       input={'image_dimensions': '512x512'})

image_output = text2image("A cat riding a motorcycle by Picasso")
```
@@ -15,7 +15,7 @@ custom LLMs, you can use the `SelfHostedPipeline` parent class.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```

-For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](../modules/llms/integrations/self_hosted_examples.ipynb)
+For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](../modules/models/llms/integrations/self_hosted_examples.ipynb)

## Self-hosted Embeddings
There are several ways to use self-hosted embeddings with LangChain via Runhouse.

@@ -26,6 +26,4 @@ the `SelfHostedEmbedding` class.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```

-For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
-
-##
+For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](../modules/models/text_embedding/examples/self-hosted.ipynb)
@@ -47,12 +47,24 @@ s.run("what is a large language model?")

### Tool

-You can also easily load this wrapper as a Tool (to use with an Agent).
+You can also load this wrapper as a Tool (to use with an Agent).

You can do this with:

```python
from langchain.agents import load_tools
-tools = load_tools(["searx-search"], searx_host="http://localhost:8888")
+tools = load_tools(["searx-search"],
+                   searx_host="http://localhost:8888",
+                   engines=["github"])
```

-For more information on tools, see [this page](../modules/agents/tools.md)
+Note that we could _optionally_ pass custom engines to use.
+
+If you want to obtain results with metadata as *JSON* you can use:
+```python
+tools = load_tools(["searx-search-results-json"],
+                   searx_host="http://localhost:8888",
+                   num_results=5)
+```
+
+For more information on tools, see [this page](../modules/agents/tools/getting_started.md)
@@ -17,7 +17,7 @@ There exists a SerpAPI utility which wraps this API. To import this utility:
from langchain.utilities import SerpAPIWrapper
```

-For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/serpapi.ipynb).
+For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/serpapi.ipynb).

### Tool

@@ -28,4 +28,4 @@ from langchain.agents import load_tools
tools = load_tools(["serpapi"])
```

-For more information on this, see [this page](../modules/agents/tools.md)
+For more information on this, see [this page](../modules/agents/tools/getting_started.md)
625 docs/ecosystem/wandb_tracking.ipynb Normal file
@@ -0,0 +1,625 @@
# Weights & Biases

This notebook goes over how to track your LangChain experiments into one centralized Weights and Biases dashboard. To learn more about prompt engineering and the callback, please refer to this Report, which explains both alongside the resultant dashboards you can expect to see.

Run in Colab: https://colab.research.google.com/drive/1DXH4beT4HFaRKy_Vm4PoxhXVDRf7Ym8L?usp=sharing

View Report: https://wandb.ai/a-sh0ts/langchain_callback_demo/reports/Prompt-Engineering-LLMs-with-LangChain-and-W-B--VmlldzozNjk1NTUw#👋-how-to-build-a-callback-in-langchain-for-better-prompt-engineering

```python
!pip install wandb
!pip install pandas
!pip install textstat
!pip install spacy
!python -m spacy download en_core_web_sm
```

```python
import os

os.environ["WANDB_API_KEY"] = ""
# os.environ["OPENAI_API_KEY"] = ""
# os.environ["SERPAPI_API_KEY"] = ""
```

```python
from datetime import datetime

from langchain.callbacks import WandbCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.llms import OpenAI
```

```
Callback Handler that logs to Weights and Biases.

Parameters:
    job_type (str): The type of job.
    project (str): The project to log to.
    entity (str): The entity to log to.
    tags (list): The tags to log.
    group (str): The group to log to.
    name (str): The name of the run.
    notes (str): The notes to log.
    visualize (bool): Whether to visualize the run.
    complexity_metrics (bool): Whether to log complexity metrics.
    stream_logs (bool): Whether to stream callback actions to W&B
```

```
Default values for WandbCallbackHandler(...)

visualize: bool = False,
complexity_metrics: bool = False,
stream_logs: bool = False,
```

NOTE: For beta workflows we have made the default analysis based on textstat and the visualizations based on spacy.

```python
"""Main function.

This function is used to try the callback handler.
Scenarios:
1. OpenAI LLM
2. Chain with multiple SubChains on multiple generations
3. Agent with Tools
"""
session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")
wandb_callback = WandbCallbackHandler(
    job_type="inference",
    project="langchain_callback_demo",
    group=f"minimal_{session_group}",
    name="llm",
    tags=["test"],
)
manager = CallbackManager([StdOutCallbackHandler(), wandb_callback])
llm = OpenAI(temperature=0, callback_manager=manager, verbose=True)
```

```
wandb: Currently logged in as: harrison-chase. Use `wandb login --relogin` to force relogin
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150408-e47j1914
Syncing run llm to Weights & Biases (https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914)
View project at https://wandb.ai/harrison-chase/langchain_callback_demo
View run at https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914
wandb: WARNING The wandb callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/wandb/wandb/issues with the tag `langchain`.
```

```
# Defaults for WandbCallbackHandler.flush_tracker(...)

reset: bool = True,
finish: bool = False,
```

The `flush_tracker` function is used to log LangChain sessions to Weights & Biases. It takes in the LangChain module or agent, and logs at minimum the prompts and generations alongside the serialized form of the LangChain module to the specified Weights & Biases project. By default we reset the session as opposed to concluding the session outright.

```python
# SCENARIO 1 - LLM
llm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)
wandb_callback.flush_tracker(llm, name="simple_sequential")
```

```
Waiting for W&B process to finish... (success).
View run llm at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914
Synced 5 W&B file(s), 2 media file(s), 5 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20230318_150408-e47j1914/logs
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150534-jyxma7hu
Syncing run simple_sequential to Weights & Biases (https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu)
View project at https://wandb.ai/harrison-chase/langchain_callback_demo
View run at https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu
```

```python
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
```

```python
# SCENARIO 2 - Chain
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)

test_prompts = [
    {
        "title": "documentary about good video games that push the boundary of game design"
    },
    {"title": "cocaine bear vs heroin wolf"},
    {"title": "the best in class mlops tooling"},
]
synopsis_chain.apply(test_prompts)
wandb_callback.flush_tracker(synopsis_chain, name="agent")
```

```
Waiting for W&B process to finish... (success).
View run simple_sequential at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu
Synced 4 W&B file(s), 2 media file(s), 6 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20230318_150534-jyxma7hu/logs
Tracking run with wandb version 0.14.0
Run data is saved locally in /Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150550-wzy59zjq
Syncing run agent to Weights & Biases (https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq)
View project at https://wandb.ai/harrison-chase/langchain_callback_demo
View run at https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq
```

```python
from langchain.agents import initialize_agent, load_tools
```

```python
# SCENARIO 3 - Agent with Tools
tools = load_tools(["serpapi", "llm-math"], llm=llm, callback_manager=manager)
agent = initialize_agent(
    tools,
    llm,
    agent="zero-shot-react-description",
    callback_manager=manager,
    verbose=True,
)
agent.run(
    "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?"
)
wandb_callback.flush_tracker(agent, reset=False, finish=True)
```

```
> Entering new AgentExecutor chain...
 I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.
Action: Search
Action Input: "Leo DiCaprio girlfriend"
Observation: DiCaprio had a steady girlfriend in Camila Morrone. He had been with the model turned actress for nearly five years, as they were first said to be dating at the end of 2017. And the now 26-year-old Morrone is no stranger to Hollywood.
Thought: I need to calculate her age raised to the 0.43 power.
Action: Calculator
Action Input: 26^0.43
Observation: Answer: 4.059182145592686

Thought: I now know the final answer.
Final Answer: Leo DiCaprio's girlfriend is Camila Morrone and her current age raised to the 0.43 power is 4.059182145592686.

> Finished chain.

Waiting for W&B process to finish... (success).
View run agent at: https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq
Synced 5 W&B file(s), 2 media file(s), 7 artifact file(s) and 0 other file(s)
Find logs at: ./wandb/run-20230318_150550-wzy59zjq/logs
```
@@ -30,4 +30,4 @@ To import this vectorstore:
from langchain.vectorstores import Weaviate
```

-For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)
+For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](../modules/indexes/vectorstores/getting_started.ipynb)
@@ -20,7 +20,7 @@ There exists a WolframAlphaAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
```

-For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/wolfram_alpha.ipynb).
+For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/wolfram_alpha.ipynb).

### Tool

@@ -31,4 +31,4 @@ from langchain.agents import load_tools
tools = load_tools(["wolfram-alpha"])
```

-For more information on this, see [this page](../modules/agents/tools.md)
+For more information on this, see [this page](../modules/agents/tools/getting_started.md)
@@ -158,14 +158,14 @@ Open Source

---

-.. link-button:: https://github.com/jerryjliu/gpt_index
+.. link-button:: https://github.com/jerryjliu/llama_index
    :type: url
-    :text: GPT Index
+    :text: LlamaIndex
    :classes: stretched-link btn-lg

+++

-GPT Index is a project consisting of a set of data structures that are created using GPT-3 and can be traversed using GPT-3 in order to answer queries.
+LlamaIndex (formerly GPT Index) is a project consisting of a set of data structures that are created using GPT-3 and can be traversed using GPT-3 in order to answer queries.

---
@@ -36,7 +36,7 @@ os.environ["OPENAI_API_KEY"] = "..."
```

-## Building a Language Model Application
+## Building a Language Model Application: LLMs

Now that we have installed LangChain and set up our environment, we can start building our language model application.

@@ -74,7 +74,7 @@ print(llm(text))
Feetful of Fun
```

-For more details on how to use LLMs within LangChain, see the [LLM getting started guide](../modules/llms/getting_started.ipynb).
+For more details on how to use LLMs within LangChain, see the [LLM getting started guide](../modules/models/llms/getting_started.ipynb).
`````
@@ -111,7 +111,7 @@ What is a good name for a company that makes colorful socks?
```

-[For more details, check out the getting started guide for prompts.](../modules/prompts/getting_started.ipynb)
+[For more details, check out the getting started guide for prompts.](../modules/prompts/chat_prompt_template.ipynb)

`````
@@ -160,7 +160,7 @@ This is one of the simpler types of chains, but understanding how it works will
`````

-`````{dropdown} Agents: Dynamically call chains based on user input
+`````{dropdown} Agents: Dynamically Call Chains Based on User Input

So far the chains we've looked at run in a predetermined order.
@@ -210,35 +210,31 @@ tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

# Now let's test it out!
-agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
+agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
```

```pycon
-Entering new AgentExecutor chain...
-I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
+> Entering new AgentExecutor chain...
+ I need to find the temperature first, then use the calculator to raise it to the .023 power.
Action: Search
-Action Input: "Olivia Wilde boyfriend"
-Observation: Jason Sudeikis
-Thought: I need to find out Jason Sudeikis' age
-Action: Search
-Action Input: "Jason Sudeikis age"
-Observation: 47 years
-Thought: I need to calculate 47 raised to the 0.23 power
+Action Input: "High temperature in SF yesterday"
+Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
+Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action: Calculator
-Action Input: 47^0.23
-Observation: Answer: 2.4242784855673896
+Action Input: 57^.023
+Observation: Answer: 1.0974509573251117

Thought: I now know the final answer
-Final Answer: Jason Sudeikis, Olivia Wilde's boyfriend, is 47 years old and his age raised to the 0.23 power is 2.4242784855673896.
-> Finished AgentExecutor chain.
-"Jason Sudeikis, Olivia Wilde's boyfriend, is 47 years old and his age raised to the 0.23 power is 2.4242784855673896."
+Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.
+
+> Finished chain.
```

`````
-`````{dropdown} Memory: Add state to chains and agents
+`````{dropdown} Memory: Add State to Chains and Agents

So far, all the chains and agents we've gone through have been stateless. But often, you may want a chain or agent to have some concept of "memory" so that it can remember information about its previous interactions. The clearest and simplest example of this is designing a chatbot - you want it to remember previous messages so it can use that context to have a better conversation. This is a type of "short-term memory". On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of "long-term memory". For more concrete ideas on the latter, see this [awesome paper](https://memprompt.com/).
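For reference, a minimal sketch of this pattern with `ConversationChain`, which keeps a buffer of prior turns by default (the hunk below shows output from a conversation like this one):

```python
from langchain import OpenAI, ConversationChain

llm = OpenAI(temperature=0)
conversation = ConversationChain(llm=llm, verbose=True)

conversation.predict(input="Hi there!")
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
```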
@@ -287,4 +283,218 @@ AI:

> Finished chain.
" That's great! What would you like to talk about?"
```
`````

## Building a Language Model Application: Chat Models

Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than exposing a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.

Chat model APIs are fairly new, so we are still figuring out the correct abstractions.

`````{dropdown} Get Message Completions from a Chat Model
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, and `ChatMessage` -- `ChatMessage` takes in an arbitrary role parameter. Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`.

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)

chat = ChatOpenAI(temperature=0)
```

You can get completions by passing in a single message.

```python
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```

You can also pass in multiple messages for OpenAI's gpt-3.5-turbo and gpt-4 models.

```python
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```

You can go one step further and generate completions for multiple sets of messages using `generate`. This returns an `LLMResult` with an additional `message` parameter:
```python
batch_messages = [
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ],
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
    ],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})
```

You can recover things like token usage from this LLMResult:
```python
result.llm_output['token_usage']
# -> {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}
```
`````

`````{dropdown} Chat Prompt Templates
Similar to LLMs, you can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplate`s. You can use `ChatPromptTemplate`'s `format_prompt` -- this returns a `PromptValue`, which you can convert to a string or `Message` object, depending on whether you want to use the formatted value as input to an LLM or chat model.

For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```
`````

`````{dropdown} Chains with Chat Models
The `LLMChain` discussed in the above section can be used with chat models as well:

```python
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

chat = ChatOpenAI(temperature=0)

template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
```
`````

`````{dropdown} Agents with Chat Models
Agents can also be used with chat models; you can initialize one using `"chat-zero-shot-react-description"` as the agent type.

```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)

# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent="chat-zero-shot-react-description", verbose=True)

# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
```

```pycon

> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
  "action": "Search",
  "action_input": "Olivia Wilde boyfriend"
}

Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to use a search engine to find Harry Styles' current age.
Action:
{
  "action": "Search",
  "action_input": "Harry Styles age"
}

Observation: 29 years
Thought: Now I need to calculate 29 raised to the 0.23 power.
Action:
{
  "action": "Calculator",
  "action_input": "29^0.23"
}

Observation: Answer: 2.169459462491557

Thought: I now know the final answer.
Final Answer: 2.169459462491557

> Finished chain.
'2.169459462491557'
```
`````

`````{dropdown} Memory: Add State to Chains and Agents
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.

```python
from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
    MessagesPlaceholder(variable_name="history"),
    HumanMessagePromptTemplate.from_template("{input}")
])

llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)

conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'

conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"

conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
```
`````
@@ -32,7 +32,7 @@ This induces the to model to think about what action to take, then take it.
|
||||
Resources:
|
||||
|
||||
- [Paper](https://arxiv.org/pdf/2210.03629.pdf)
|
||||
- [LangChain Example](./modules/agents/implementations/react.ipynb)
|
||||
- [LangChain Example](modules/agents/agents/examples/react.ipynb)
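To make the pattern concrete, here is a minimal sketch (an illustration, not taken from the linked notebook) of a ReAct-style agent in LangChain; it assumes `OPENAI_API_KEY` and `SERPAPI_API_KEY` are set in the environment:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

# Sketch only: the "zero-shot-react-description" agent type follows the
# ReAct Thought/Action/Observation loop described above.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("How many people live in canada as of 2023?")
```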

## Self-ask

@@ -42,7 +42,7 @@ In this method, the model explicitly asks itself follow-up questions, which are
Resources:

- [Paper](https://ofir.io/self-ask.pdf)
- [LangChain Example](./modules/agents/implementations/self_ask_with_search.ipynb)
- [LangChain Example](modules/agents/agents/examples/self_ask_with_search.ipynb)
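As a minimal sketch of self-ask in LangChain (again an illustration, assuming a SerpAPI key is configured), note that the single search tool must be named "Intermediate Answer" so the agent can route its follow-up questions to it:

```python
from langchain import OpenAI, SerpAPIWrapper
from langchain.agents import initialize_agent, Tool

# Sketch only: self-ask expects exactly one tool, named "Intermediate Answer".
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search"
    )
]
self_ask_with_search = initialize_agent(tools, llm, agent="self-ask-with-search", verbose=True)
self_ask_with_search.run("What is the hometown of the reigning men's U.S. Open champion?")
```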

## Prompt Chaining


@@ -1,28 +1,14 @@
Welcome to LangChain
==========================

Large language models (LLMs) are emerging as a transformative technology, enabling
developers to build applications that they previously could not.
But using these LLMs in isolation is often not enough to
create a truly powerful app - the real power comes when you are able to
combine them with other sources of computation or knowledge.
LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:

This library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:
- *Be data-aware*: connect a language model to other sources of data
- *Be agentic*: allow a language model to interact with its environment

**❓ Question Answering over specific documents**
The LangChain framework is designed with the above principles in mind.

- `Documentation <./use_cases/question_answering.html>`_
- End-to-end Example: `Question Answering over Notion Database <https://github.com/hwchase17/notion-qa>`_

**💬 Chatbots**

- `Documentation <./use_cases/chatbots.html>`_
- End-to-end Example: `Chat-LangChain <https://github.com/hwchase17/chat-langchain>`_

**🤖 Agents**

- `Documentation <./use_cases/agents.html>`_
- End-to-end Example: `GPT+WolframAlpha <https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain>`_
This is the Python-specific portion of the documentation. For a purely conceptual guide to LangChain, see `here <https://docs.langchain.com/docs/>`_. For the JavaScript documentation, see `here <https://js.langchain.com/docs/>`_.

Getting Started
----------------
@@ -46,25 +32,18 @@ There are several main modules that LangChain provides support for.
For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.
These modules are, in increasing order of complexity:

- `Models <./modules/models.html>`_: The various model types and model integrations LangChain supports.

- `Prompts <./modules/prompts.html>`_: This includes prompt management, prompt optimization, and prompt serialization.

- `LLMs <./modules/llms.html>`_: This includes a generic interface for all LLMs, and common utilities for working with LLMs.

- `Document Loaders <./modules/document_loaders.html>`_: This includes a standard interface for loading documents, as well as specific integrations to all types of text data sources.

- `Utils <./modules/utils.html>`_: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.

- `Chains <./modules/chains.html>`_: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
- `Memory <./modules/memory.html>`_: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.

- `Indexes <./modules/indexes.html>`_: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.

- `Chains <./modules/chains.html>`_: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.

- `Agents <./modules/agents.html>`_: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.

- `Memory <./modules/memory.html>`_: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.

- `Chat <./modules/chat.html>`_: Chat models are a variation on Language Models that expose a different API - rather than working with raw text, they work with messages. LangChain provides a standard interface for working with them and doing all the same things as above.


.. toctree::
   :maxdepth: 1
@@ -72,40 +51,34 @@ These modules are, in increasing order of complexity:
   :name: modules
   :hidden:

   ./modules/prompts.md
   ./modules/llms.md
   ./modules/document_loaders.md
   ./modules/utils.md
   ./modules/models.rst
   ./modules/prompts.rst
   ./modules/indexes.md
   ./modules/memory.md
   ./modules/chains.md
   ./modules/agents.md
   ./modules/memory.md
   ./modules/chat.md

Use Cases
----------

The above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.

- `Agents <./use_cases/agents.html>`_: Agents are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.
- `Personal Assistants <./use_cases/personal_assistants.html>`_: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.

- `Question Answering <./use_cases/question_answering.html>`_: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.

- `Chatbots <./use_cases/chatbots.html>`_: Since language models are good at producing text, that makes them ideal for creating chatbots.

- `Data Augmented Generation <./use_cases/combine_docs.html>`_: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.
- `Querying Tabular Data <./use_cases/tabular.html>`_: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.

- `Question Answering <./use_cases/question_answering.html>`_: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.
- `Interacting with APIs <./use_cases/apis.html>`_: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.

- `Extraction <./use_cases/extraction.html>`_: Extract structured information from text.

- `Summarization <./use_cases/summarization.html>`_: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.

- `Querying Tabular Data <./use_cases/tabular.html>`_: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.

- `Evaluation <./use_cases/evaluation.html>`_: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.

- `Generate similar examples <./use_cases/generate_examples.html>`_: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.

- `Compare models <./use_cases/model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.


.. toctree::
   :maxdepth: 1
@@ -113,15 +86,14 @@ The above modules can be used in a variety of ways. LangChain also provides guid
   :name: use_cases
   :hidden:

   ./use_cases/agents.md
   ./use_cases/chatbots.md
   ./use_cases/generate_examples.ipynb
   ./use_cases/combine_docs.md
   ./use_cases/personal_assistants.md
   ./use_cases/question_answering.md
   ./use_cases/summarization.md
   ./use_cases/chatbots.md
   ./use_cases/tabular.rst
   ./use_cases/apis.md
   ./use_cases/summarization.md
   ./use_cases/extraction.md
   ./use_cases/evaluation.rst
   ./use_cases/model_laboratory.ipynb


Reference Docs
@@ -172,10 +144,12 @@ Additional collection of resources we think may be useful as you develop your ap

- `Deployments <./deployments.html>`_: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.

- `Discord <https://discord.gg/6adMQxSpJS>`_: Join us on our Discord to discuss all things LangChain!

- `Tracing <./tracing.html>`_: A guide on using tracing in LangChain to visualize the execution of chains and agents.

- `Model Laboratory <./model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.

- `Discord <https://discord.gg/6adMQxSpJS>`_: Join us on our Discord to discuss all things LangChain!

- `Production Support <https://forms.gle/57d8AmXBYp8PP8tZA>`_: As you move your LangChains into production, we'd love to offer more comprehensive support. Please fill out this form and we'll set up a dedicated support Slack channel.


@@ -190,5 +164,6 @@ Additional collection of resources we think may be useful as you develop your ap
   ./gallery.rst
   ./deployments.md
   ./tracing.md
   ./use_cases/model_laboratory.ipynb
   Discord <https://discord.gg/6adMQxSpJS>
   Production Support <https://forms.gle/57d8AmXBYp8PP8tZA>

@@ -1,30 +1,52 @@
Agents
==========================

.. note::
   `Conceptual Guide <https://docs.langchain.com/docs/components/agents>`_


Some applications will require not just a predetermined chain of calls to LLMs/other tools,
but potentially an unknown chain that depends on the user's input.
In these types of chains, there is an “agent” which has access to a suite of tools.
Depending on the user input, the agent can then decide which, if any, of these tools to call.

The following sections of documentation are provided:

- `Getting Started <./agents/getting_started.html>`_: A notebook to help you get started working with agents as quickly as possible.

- `Key Concepts <./agents/key_concepts.html>`_: A conceptual guide going over the various concepts related to agents.

- `How-To Guides <./agents/how_to_guides.html>`_: A collection of how-to guides. These highlight how to integrate various types of tools, how to work with different types of agents, and how to customize agents.

- `Reference <../reference/modules/agents.html>`_: API reference documentation for all Agent classes.


In this section of documentation, we first start with a Getting Started notebook to go over how to use all things related to agents in an end-to-end manner.

.. toctree::
   :maxdepth: 1
   :caption: Agents
   :name: Agents
   :hidden:

   ./agents/getting_started.ipynb
   ./agents/key_concepts.md
   ./agents/how_to_guides.rst
   Reference<../reference/modules/agents.rst>


We then split the documentation into the following sections:

**Tools**

An overview of the various tools LangChain supports.


**Agents**

An overview of the different agent types.


**Toolkits**

An overview of toolkits, and examples of the different ones LangChain supports.


**Agent Executor**

An overview of the Agent Executor class and examples of how to use it.

Go Deeper
---------

.. toctree::
   :maxdepth: 1

   ./agents/tools.rst
   ./agents/agents.rst
   ./agents/toolkits.rst
   ./agents/agent_executors.rst

17
docs/modules/agents/agent_executors.rst
Normal file
@@ -0,0 +1,17 @@
Agent Executors
===============

.. note::
   `Conceptual Guide <https://docs.langchain.com/docs/components/agents/agent-executor>`_

Agent executors take an agent and tools and use the agent to decide which tools to call and in what order.

In this part of the documentation we cover other functionality related to agent executors.


.. toctree::
   :maxdepth: 1
   :glob:

   ./agent_executors/examples/*
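To make this concrete, here is a minimal sketch (an illustration, not part of the file above) of building an agent and handing it to an executor explicitly, which is essentially what `initialize_agent` does under the hood:

```python
from langchain.agents import AgentExecutor, ZeroShotAgent, load_tools
from langchain.llms import OpenAI

# Sketch only: the executor repeatedly asks the agent for the next action,
# calls the chosen tool, and feeds the observation back in until done.
# Assumes OPENAI_API_KEY and SERPAPI_API_KEY are set.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = ZeroShotAgent.from_llm_and_tools(llm, tools)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run("How many people live in canada as of 2023?")
```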

@@ -5,7 +5,7 @@
"id": "68b24990",
"metadata": {},
"source": [
"# Agents and Vectorstores\n",
"# How to combine agents and vectorstores\n",
"\n",
"This notebook covers how to combine agents and vectorstores. The use case for this is that you've ingested your data into a vectorstore and want to interact with it in an agentic manner.\n",
"\n",
@@ -22,7 +22,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 16,
"id": "2e87c10a",
"metadata": {},
"outputs": [],
@@ -30,13 +30,30 @@
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain import OpenAI, VectorDBQA\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains import RetrievalQA\n",
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 37,
"execution_count": 17,
"id": "0b7b772b",
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"relevant_parts = []\n",
"for p in Path(\".\").absolute().parts:\n",
"    relevant_parts.append(p)\n",
"    if relevant_parts[-3:] == [\"langchain\", \"docs\", \"modules\"]:\n",
"        break\n",
"doc_path = str(Path(*relevant_parts) / \"state_of_the_union.txt\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "f2675861",
"metadata": {},
"outputs": [
@@ -51,7 +68,7 @@
],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"loader = TextLoader(doc_path)\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
@@ -62,17 +79,17 @@
},
{
"cell_type": "code",
"execution_count": 38,
"execution_count": 4,
"id": "bc5403d4",
"metadata": {},
"outputs": [],
"source": [
"state_of_union = VectorDBQA.from_chain_type(llm=llm, chain_type=\"stuff\", vectorstore=docsearch)"
"state_of_union = RetrievalQA.from_chain_type(llm=llm, chain_type=\"stuff\", retriever=docsearch.as_retriever())"
]
},
{
"cell_type": "code",
"execution_count": 39,
"execution_count": 5,
"id": "1431cded",
"metadata": {},
"outputs": [],
@@ -82,7 +99,7 @@
},
{
"cell_type": "code",
"execution_count": 40,
"execution_count": 6,
"id": "915d3ff3",
"metadata": {},
"outputs": [],
@@ -92,7 +109,7 @@
},
{
"cell_type": "code",
"execution_count": 41,
"execution_count": 7,
"id": "96a2edf8",
"metadata": {},
"outputs": [
@@ -109,7 +126,7 @@
"docs = loader.load()\n",
"ruff_texts = text_splitter.split_documents(docs)\n",
"ruff_db = Chroma.from_documents(ruff_texts, embeddings, collection_name=\"ruff\")\n",
"ruff = VectorDBQA.from_chain_type(llm=llm, chain_type=\"stuff\", vectorstore=ruff_db)"
"ruff = RetrievalQA.from_chain_type(llm=llm, chain_type=\"stuff\", retriever=ruff_db.as_retriever())"
]
},
{
@@ -264,9 +281,9 @@
"id": "9161ba91",
"metadata": {},
"source": [
"You can also set `return_direct=True` if you intend to use the agent as a router and just want to directly return the result of the VectorDBQaChain.\n",
"You can also set `return_direct=True` if you intend to use the agent as a router and just want to directly return the result of the RetrievalQAChain.\n",
"\n",
"Notice that in the above examples the agent did some extra work after querying the VectorDBQAChain. You can avoid that and just return the result directly."
"Notice that in the above examples the agent did some extra work after querying the RetrievalQAChain. You can avoid that and just return the result directly."
]
},
{
@@ -5,7 +5,7 @@
"id": "6fb92deb-d89e-439b-855d-c7f2607d794b",
"metadata": {},
"source": [
"# Async API for Agent\n",
"# How to use the async API for Agents\n",
"\n",
"LangChain provides async support for Agents by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
"\n",
@@ -403,7 +403,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,
@@ -5,7 +5,7 @@
"id": "b253f4d5",
"metadata": {},
"source": [
"# ChatGPT Clone\n",
"# How to create a ChatGPT Clone\n",
"\n",
"This chain replicates ChatGPT by combining (1) a specific prompt, and (2) the concept of memory.\n",
"\n",
@@ -5,7 +5,7 @@
"id": "5436020b",
"metadata": {},
"source": [
"# Intermediate Steps\n",
"# How to access intermediate steps\n",
"\n",
"In order to get more visibility into what an agent is doing, we can also return intermediate steps. This comes in the form of an extra key in the return value, which is a list of (action, observation) tuples."
]
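A hedged sketch of the mechanism (an illustration, not part of the notebook diff; the question string is arbitrary): pass `return_intermediate_steps=True` when constructing the agent, then call it with a dict instead of `run`:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

# Sketch only: with return_intermediate_steps=True the response dict
# carries the list of (AgentAction, observation) tuples.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent="zero-shot-react-description", return_intermediate_steps=True
)
response = agent({"input": "How many people live in canada as of 2023?"})
print(response["intermediate_steps"])
```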
@@ -5,7 +5,7 @@
"id": "75c041b7",
"metadata": {},
"source": [
"# Max Iterations\n",
"# How to cap the max number of iterations\n",
"\n",
"This notebook walks through how to cap an agent at taking a certain number of steps. This can be useful to ensure that it does not go haywire and take too many steps."
]
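A hedged sketch of the knob this notebook covers (an illustration, not part of the diff): `initialize_agent` forwards `max_iterations` to the executor, which stops the loop after that many agent steps:

```python
from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

# Sketch only: the executor gives up after 2 steps instead of looping
# indefinitely on an adversarial or impossible prompt.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent="zero-shot-react-description", verbose=True, max_iterations=2
)
agent.run("Keep searching until you find the answer to life, the universe, and everything.")
```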
@@ -1,12 +1,11 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "fa6802ac",
"metadata": {},
"source": [
"# Adding SharedMemory to an Agent and its Tools\n",
"# How to add SharedMemory to an Agent and its Tools\n",
"\n",
"This notebook goes over adding memory to **both** an Agent and its tools. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:\n",
"\n",
@@ -260,7 +259,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "4ebd8326",
"metadata": {},
@@ -292,7 +290,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "cc3d0aa4",
"metadata": {},
@@ -493,7 +490,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "d07415da",
"metadata": {},
@@ -544,7 +540,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,
35
docs/modules/agents/agents.rst
Normal file
@@ -0,0 +1,35 @@
Agents
=============

.. note::
   `Conceptual Guide <https://docs.langchain.com/docs/components/agents/agent>`_


In this part of the documentation we cover the different types of agents, regardless of which specific tools they are used with.

For a high-level overview of the different types of agents, see the documentation below.

.. toctree::
   :maxdepth: 1
   :glob:

   ./agents/agent_types.md

For documentation on how to create a custom agent, see below.

.. toctree::
   :maxdepth: 1
   :glob:

   ./agents/custom_agent.ipynb

We also have documentation for an in-depth dive into each agent type.

.. toctree::
   :maxdepth: 1
   :glob:

   ./agents/examples/*

@@ -1,12 +1,9 @@
# Agents
# Agent Types

Agents use an LLM to determine which actions to take and in what order.
An action can either be using a tool and observing its output, or returning a response to the user.
For a list of easily loadable tools, see [here](tools.md).
Here are the agents available in LangChain.

For a tutorial on how to load agents, see [here](getting_started.ipynb).

## `zero-shot-react-description`

This agent uses the ReAct framework to determine which tool to use
1089
docs/modules/agents/agents/custom-convo-agent.ipynb
Normal file
File diff suppressed because one or more lines are too long
196
docs/modules/agents/agents/custom_agent.ipynb
Normal file
@@ -0,0 +1,196 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ba5f8741",
"metadata": {},
"source": [
"# Custom Agent\n",
"\n",
"This notebook goes through how to create your own custom agent.\n",
"\n",
"An agent consists of two parts:\n",
" \n",
" - Tools: The tools the agent has available to use.\n",
" - The agent class itself: this decides which action to take.\n",
" \n",
" \n",
"In this notebook we walk through how to create a custom agent."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9af9734e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool, AgentExecutor\n",
"from langchain import OpenAI, SerpAPIWrapper"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "becda2a1",
"metadata": {},
"outputs": [],
"source": [
"search = SerpAPIWrapper()\n",
"tools = [\n",
"    Tool(\n",
"        name = \"Search\",\n",
"        func=search.run,\n",
"        description=\"useful for when you need to answer questions about current events\",\n",
"        return_direct=True\n",
"    )\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "eca724fc",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents.agent import BaseAgent"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a33e2f7e",
"metadata": {},
"outputs": [],
"source": [
"from typing import List, Tuple, Any, Union\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"\n",
"class FakeAgent(BaseAgent):\n",
"    \"\"\"Fake Custom Agent.\"\"\"\n",
"    \n",
"    @property\n",
"    def input_keys(self):\n",
"        return [\"input\"]\n",
"    \n",
"    def plan(\n",
"        self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n",
"    ) -> Union[AgentAction, AgentFinish]:\n",
"        \"\"\"Given input, decide what to do.\n",
"\n",
"        Args:\n",
"            intermediate_steps: Steps the LLM has taken to date,\n",
"                along with observations\n",
"            **kwargs: User inputs.\n",
"\n",
"        Returns:\n",
"            Action specifying what tool to use.\n",
"        \"\"\"\n",
"        return AgentAction(tool=\"Search\", tool_input=\"foo\", log=\"\")\n",
"\n",
"    async def aplan(\n",
"        self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n",
"    ) -> Union[AgentAction, AgentFinish]:\n",
"        \"\"\"Given input, decide what to do.\n",
"\n",
"        Args:\n",
"            intermediate_steps: Steps the LLM has taken to date,\n",
"                along with observations\n",
"            **kwargs: User inputs.\n",
"\n",
"        Returns:\n",
"            Action specifying what tool to use.\n",
"        \"\"\"\n",
"        return AgentAction(tool=\"Search\", tool_input=\"foo\", log=\"\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "655d72f6",
"metadata": {},
"outputs": [],
"source": [
"agent = FakeAgent()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "490604e9",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "653b1617",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\u001b[0m\u001b[36;1m\u001b[1;3mFoo Fighters is an American rock band formed in Seattle in 1994. Foo Fighters was initially formed as a one-man project by former Nirvana drummer Dave Grohl. Following the success of the 1995 eponymous debut album, Grohl recruited a band consisting of Nate Mendel, William Goldsmith, and Pat Smear.\u001b[0m\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Foo Fighters is an American rock band formed in Seattle in 1994. Foo Fighters was initially formed as a one-man project by former Nirvana drummer Dave Grohl. Following the success of the 1995 eponymous debut album, Grohl recruited a band consisting of Nate Mendel, William Goldsmith, and Pat Smear.'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"How many people live in canada as of 2023?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "adefb4c2",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
219
docs/modules/agents/agents/custom_llm_agent.ipynb
Normal file
@@ -0,0 +1,219 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ba5f8741",
"metadata": {},
"source": [
"# Custom LLM Agent\n",
"\n",
"This notebook goes through how to create your own custom LLM agent.\n",
"\n",
"An LLM agent consists of three parts:\n",
" \n",
" - Tools: The tools the agent has available to use.\n",
" - LLMChain: The LLMChain that produces the text that is parsed in a certain way to determine which action to take.\n",
" - OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object.\n",
" \n",
" \n",
"In this notebook we walk through how to create a custom LLM agent."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9af9734e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool, AgentExecutor\n",
"from langchain.agents.agent import LLMAgent, AgentOutputParser\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain import OpenAI, SerpAPIWrapper, LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "becda2a1",
"metadata": {},
"outputs": [],
"source": [
"search = SerpAPIWrapper()\n",
"tools = [\n",
"    Tool(\n",
"        name = \"Search\",\n",
"        func=search.run,\n",
"        description=\"useful for when you need to answer questions about current events\"\n",
"    )\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "339b1bb8",
"metadata": {},
"outputs": [],
"source": [
"template = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n",
"\n",
"Search: useful for when you need to answer questions about current events\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [Search]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Args\"\n",
"\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\"\n",
"\n",
"class CustomPromptTemplate(StringPromptTemplate):\n",
"    \n",
"    def format(self, **kwargs) -> str:\n",
"        intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
"        thoughts = \"\"\n",
"        for action, observation in intermediate_steps:\n",
"            thoughts += action.log\n",
"            thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
"        kwargs[\"agent_scratchpad\"] = thoughts\n",
"        return template.format(**kwargs)\n",
"\n",
"prompt = CustomPromptTemplate(input_variables=[\"input\", \"intermediate_steps\"])\n",
"\n",
"FINAL_ANSWER_ACTION = \"Final Answer:\"\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"import re\n",
"class CustomOutputParser(AgentOutputParser):\n",
"    \n",
"    def parse(self, llm_output):\n",
"        if FINAL_ANSWER_ACTION in llm_output:\n",
"            return AgentFinish(\n",
"                return_values={\"output\": llm_output.split(FINAL_ANSWER_ACTION)[-1].strip()},\n",
"                log=llm_output,\n",
"            )\n",
"        # \\s matches against tab/newline/whitespace\n",
"        regex = r\"Action: (.*?)[\\n]*Action Input:[\\s]*(.*)\"\n",
"        match = re.search(regex, llm_output, re.DOTALL)\n",
"        if not match:\n",
"            raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
"        action = match.group(1).strip()\n",
"        action_input = match.group(2)\n",
"        return AgentAction(tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output)\n",
"\n",
"output_parser = CustomOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9b1cc2a2",
"metadata": {},
"outputs": [],
"source": [
"llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e4f5092f",
"metadata": {},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in tools]\n",
"agent = LLMAgent(llm_chain=llm_chain, output_parser=output_parser, stop=[\"\\nObservation:\"], allowed_tools=tool_names)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "490604e9",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "653b1617",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out the population of Canada in 2023\n",
"Action: Search\n",
"Action Input: Population of Canada in 2023\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mThe current population of Canada is 38,644,767 as of Tuesday, March 28, 2023, based on Worldometer elaboration of the latest United Nations data.\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Arrr, Canada be havin' 38,644,767 people livin' there as of 2023!\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Arrr, Canada be havin' 38,644,767 people livin' there as of 2023!\""
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"How many people live in canada as of 2023?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "adefb4c2",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -5,18 +5,18 @@
"id": "ba5f8741",
"metadata": {},
"source": [
"# Custom Agent\n",
"# Custom MRKL Agent\n",
"\n",
"This notebook goes through how to create your own custom agent.\n",
"This notebook goes through how to create your own custom MRKL agent.\n",
"\n",
"An agent consists of three parts:\n",
"A MRKL agent consists of three parts:\n",
" \n",
" - Tools: The tools the agent has available to use.\n",
" - LLMChain: The LLMChain that produces the text that is parsed in a certain way to determine which action to take.\n",
" - The agent class itself: this parses the output of the LLMChain to determine which action to take.\n",
" \n",
" \n",
"In this notebook we walk through two types of custom agents. The first type shows how to create a custom LLMChain, but still use an existing agent class to parse the output. The second shows how to create a custom agent class."
"In this notebook we walk through how to create a custom MRKL agent by creating a custom LLMChain."
]
},
{
@@ -310,16 +310,6 @@
"agent_executor.run(input=\"How many people live in canada as of 2023?\", language=\"italian\")"
]
},
{
"cell_type": "markdown",
"id": "90171b2b",
"metadata": {},
"source": [
"### Custom Agent Class\n",
"\n",
"Coming soon."
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -12,7 +12,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 1,
"id": "4e272b47",
"metadata": {},
"outputs": [],
@@ -40,7 +40,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 2,
"id": "8078c8f1",
"metadata": {},
"outputs": [
@@ -52,16 +52,21 @@
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Thought 1: I need to search David Chanoff and find the U.S. Navy admiral he collaborated\n",
"with.\n",
"Action 1: Search[David Chanoff]\u001b[0m\n",
"Observation 1: \u001b[36;1m\u001b[1;3mDavid Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.\u001b[0m\n",
"Thought 2:\u001b[32;1m\u001b[1;3m The U.S. Navy admiral David Chanoff collaborated with is William J. Crowe.\n",
"Action 2: Search[William J. Crowe]\u001b[0m\n",
"Observation 2: \u001b[36;1m\u001b[1;3mWilliam James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n",
"Thought 3:\u001b[32;1m\u001b[1;3m The President William J. Crowe served as the ambassador to the United Kingdom under is Bill Clinton.\n",
"Action 3: Finish[Bill Clinton]\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
"Thought: I need to search David Chanoff and find the U.S. Navy admiral he collaborated with. Then I need to find which President the admiral served under.\n",
"\n",
"Action: Search[David Chanoff]\n",
"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mDavid Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m The U.S. Navy admiral David Chanoff collaborated with is William J. Crowe. I need to find which President he served under.\n",
"\n",
"Action: Search[William J. Crowe]\n",
"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mWilliam James Crowe Jr. (January 2, 1925 – October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m William J. Crowe served as the ambassador to the United Kingdom under President Bill Clinton, so the answer is Bill Clinton.\n",
"\n",
"Action: Finish[Bill Clinton]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -70,7 +75,7 @@
"'Bill Clinton'"
]
},
"execution_count": 5,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@@ -79,6 +84,14 @@
"question = \"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\"\n",
"react.run(question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "09604a7f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -1,131 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "991b1cc1",
"metadata": {},
"source": [
"# Loading from LangChainHub\n",
"\n",
"This notebook covers how to load agents from [LangChainHub](https://github.com/hwchase17/langchain-hub)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "bd4450a2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"No `_type` key found, defaulting to `prompt`.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3m Yes.\n",
"Follow up: Who is the reigning men's U.S. Open champion?\u001B[0m\n",
"Intermediate answer: \u001B[36;1m\u001B[1;3m2016 · SUI · Stan Wawrinka ; 2017 · ESP · Rafael Nadal ; 2018 · SRB · Novak Djokovic ; 2019 · ESP · Rafael Nadal.\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mSo the reigning men's U.S. Open champion is Rafael Nadal.\n",
"Follow up: What is Rafael Nadal's hometown?\u001B[0m\n",
"Intermediate answer: \u001B[36;1m\u001B[1;3mIn 2016, he once again showed his deep ties to Mallorca and opened the Rafa Nadal Academy in his hometown of Manacor.\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mSo the final answer is: Manacor, Mallorca, Spain.\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"'Manacor, Mallorca, Spain.'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import OpenAI, SerpAPIWrapper\n",
"from langchain.agents import initialize_agent, Tool\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
"    Tool(\n",
"        name=\"Intermediate Answer\",\n",
"        func=search.run,\n",
"        description=\"useful for when you need to ask with search\"\n",
"    )\n",
"]\n",
"\n",
"self_ask_with_search = initialize_agent(tools, llm, agent_path=\"lc://agents/self-ask-with-search/agent.json\", verbose=True)\n",
"self_ask_with_search.run(\"What is the hometown of the reigning men's U.S. Open champion?\")"
]
},
{
"cell_type": "markdown",
"id": "3aede965",
"metadata": {},
"source": [
"# Pinning Dependencies\n",
"\n",
"Specific versions of LangChainHub agents can be pinned with the `lc@<ref>://` syntax."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e679f7b6",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"No `_type` key found, defaulting to `prompt`.\n"
]
}
],
"source": [
"self_ask_with_search = initialize_agent(tools, llm, agent_path=\"lc@2826ef9e8acdf88465e1e5fc8a7bf59e0f9d0a85://agents/self-ask-with-search/agent.json\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d3d6697",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -1,154 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "bfe18e28",
"metadata": {},
"source": [
"# Serialization\n",
"\n",
"This notebook goes over how to serialize agents. For this notebook, it is important to understand the distinction we draw between `agents` and `tools`. An agent is the LLM powered decision maker that decides which actions to take and in which order. Tools are various instruments (functions) an agent has access to, through which an agent can interact with the outside world. When people generally use agents, they primarily talk about using an agent WITH tools. However, when we talk about serialization of agents, we are talking about the agent by itself. We plan to add support for serializing an agent WITH tools sometime in the future.\n",
"\n",
"Let's start by creating an agent with tools as we normally do:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "eb729f16",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"from langchain.agents import initialize_agent\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)"
]
},
{
"cell_type": "markdown",
"id": "0578f566",
"metadata": {},
"source": [
"Let's now serialize the agent. To be explicit that we are serializing ONLY the agent, we will call the `save_agent` method."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dc544de6",
"metadata": {},
"outputs": [],
"source": [
"agent.save_agent('agent.json')"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62dd45bf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\r\n",
"    \"llm_chain\": {\r\n",
"        \"memory\": null,\r\n",
"        \"verbose\": false,\r\n",
"        \"prompt\": {\r\n",
"            \"input_variables\": [\r\n",
"                \"input\",\r\n",
"                \"agent_scratchpad\"\r\n",
"            ],\r\n",
"            \"output_parser\": null,\r\n",
"            \"template\": \"Answer the following questions as best you can. You have access to the following tools:\\n\\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [Search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: {input}\\nThought:{agent_scratchpad}\",\r\n",
"            \"template_format\": \"f-string\",\r\n",
"            \"validate_template\": true,\r\n",
"            \"_type\": \"prompt\"\r\n",
"        },\r\n",
"        \"llm\": {\r\n",
"            \"model_name\": \"text-davinci-003\",\r\n",
"            \"temperature\": 0.0,\r\n",
"            \"max_tokens\": 256,\r\n",
"            \"top_p\": 1,\r\n",
"            \"frequency_penalty\": 0,\r\n",
"            \"presence_penalty\": 0,\r\n",
"            \"n\": 1,\r\n",
"            \"best_of\": 1,\r\n",
"            \"request_timeout\": null,\r\n",
"            \"logit_bias\": {},\r\n",
"            \"_type\": \"openai\"\r\n",
"        },\r\n",
"        \"output_key\": \"text\",\r\n",
"        \"_type\": \"llm_chain\"\r\n",
"    },\r\n",
"    \"allowed_tools\": [\r\n",
"        \"Search\",\r\n",
"        \"Calculator\"\r\n",
"    ],\r\n",
"    \"return_values\": [\r\n",
"        \"output\"\r\n",
"    ],\r\n",
"    \"_type\": \"zero-shot-react-description\"\r\n",
"}"
]
}
],
"source": [
"!cat agent.json"
]
},
{
"cell_type": "markdown",
"id": "0eb72510",
"metadata": {},
"source": [
"We can now load the agent back in."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "eb660b76",
"metadata": {},
"outputs": [],
"source": [
"agent = initialize_agent(tools, llm, agent_path=\"agent.json\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aa624ea5",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -1,87 +0,0 @@
"""Run NatBot."""
import time

from langchain.chains.natbot.base import NatBotChain
from langchain.chains.natbot.crawler import Crawler


def run_cmd(cmd: str, _crawler: Crawler) -> None:
    """Run command."""
    cmd = cmd.split("\n")[0]

    if cmd.startswith("SCROLL UP"):
        _crawler.scroll("up")
    elif cmd.startswith("SCROLL DOWN"):
        _crawler.scroll("down")
    elif cmd.startswith("CLICK"):
        commasplit = cmd.split(",")
        id = commasplit[0].split(" ")[1]
        _crawler.click(id)
    elif cmd.startswith("TYPE"):
        spacesplit = cmd.split(" ")
        id = spacesplit[1]
        text_pieces = spacesplit[2:]
        text = " ".join(text_pieces)
        # Strip leading and trailing double quotes
        text = text[1:-1]

        if cmd.startswith("TYPESUBMIT"):
            text += "\n"
        _crawler.type(id, text)

    time.sleep(2)


if __name__ == "__main__":
    objective = "Make a reservation for 2 at 7pm at bistro vida in menlo park"
    print("\nWelcome to natbot! What is your objective?")
    i = input()
    if len(i) > 0:
        objective = i
    quiet = False
    nat_bot_chain = NatBotChain.from_default(objective)
    _crawler = Crawler()
    _crawler.go_to_page("google.com")
    try:
        while True:
            browser_content = "\n".join(_crawler.crawl())
            llm_command = nat_bot_chain.execute(_crawler.page.url, browser_content)
            if not quiet:
                print("URL: " + _crawler.page.url)
                print("Objective: " + objective)
                print("----------------\n" + browser_content + "\n----------------\n")
            if len(llm_command) > 0:
                print("Suggested command: " + llm_command)

            command = input()
            if command == "r" or command == "":
                run_cmd(llm_command, _crawler)
            elif command == "g":
                url = input("URL:")
                _crawler.go_to_page(url)
            elif command == "u":
                _crawler.scroll("up")
                time.sleep(1)
            elif command == "d":
                _crawler.scroll("down")
                time.sleep(1)
            elif command == "c":
                id = input("id:")
                _crawler.click(id)
                time.sleep(1)
            elif command == "t":
                id = input("id:")
                text = input("text:")
                _crawler.type(id, text)
                time.sleep(1)
            elif command == "o":
                objective = input("Objective:")
            else:
                print(
                    "(g) to visit url\n(u) scroll up\n(d) scroll down\n(c) to click"
                    "\n(t) to type\n(h) to view commands again"
                    "\n(r/enter) to run suggested command\n(o) change objective"
                )
    except KeyboardInterrupt:
        print("\n[!] Ctrl+C detected, exiting gracefully.")
        exit(0)
@@ -1,16 +0,0 @@
# Key Concepts

## Agents
Agents use an LLM to determine which actions to take and in what order.
For more detailed information on agents, and different types of agents in LangChain, see [this documentation](agents.md).

## Tools
Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents.
For more detailed information on tools, and different types of tools in LangChain, see [this documentation](tools.md).

## ToolKits
Toolkits are groups of tools that are best used together.
They allow you to logically group and initialize a set of tools that share a particular resource (such as a database connection or json object).
They can be used to construct an agent for a specific use-case.
For more detailed information on toolkits and their use cases, see [this documentation](how_to_guides.rst#agent-toolkits) (the "Agent Toolkits" section).
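To make the toolkit idea concrete, here is a minimal sketch of building a use-case-specific agent from a toolkit. It assumes the `create_python_agent` helper and `PythonREPLTool` referenced elsewhere in this changeset; treat the import paths as indicative rather than authoritative.

```python
# A minimal sketch, assuming the agent-toolkit helpers referenced in this changeset.
from langchain.agents.agent_toolkits import create_python_agent
from langchain.llms import OpenAI
from langchain.tools.python.tool import PythonREPLTool

# The toolkit groups everything the agent needs for one use case (here: a Python REPL).
agent_executor = create_python_agent(
    llm=OpenAI(temperature=0),
    tool=PythonREPLTool(),
    verbose=True,
)
agent_executor.run("What is the 10th fibonacci number?")
```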
18
docs/modules/agents/toolkits.rst
Normal file
@@ -0,0 +1,18 @@
Toolkits
==============

.. note::
   `Conceptual Guide <https://docs.langchain.com/docs/components/agents/toolkit>`_


This section of documentation covers agents with toolkits, e.g. an agent applied to a particular use case.

See below for a full list of agent toolkits.


.. toctree::
   :maxdepth: 1
   :glob:

   ./toolkits/examples/*
@@ -5,7 +5,7 @@
"id": "82a4c2cc-20ea-4b20-a565-63e905dee8ff",
"metadata": {},
"source": [
"## Python Agent\n",
"# Python Agent\n",
"\n",
"This notebook showcases an agent designed to write and execute python code to answer a question."
]
@@ -36,7 +36,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "345bb078-4ec1-4e3a-827b-cd238c49054d",
"metadata": {
"tags": []
@@ -53,7 +53,7 @@
],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"loader = TextLoader('../../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
@@ -409,7 +409,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,
38
docs/modules/agents/tools.rst
Normal file
@@ -0,0 +1,38 @@
Tools
=============

.. note::
   `Conceptual Guide <https://docs.langchain.com/docs/components/agents/tool>`_


Tools are ways that an agent can interact with the outside world.

For an overview of what a tool is, how to use them, and a full list of examples, please see the getting started documentation.

.. toctree::
   :maxdepth: 1
   :glob:

   ./tools/getting_started.md

Next, we have some examples of customizing and generically working with tools.

.. toctree::
   :maxdepth: 1
   :glob:

   ./tools/custom_tools.ipynb
   ./tools/multi_input_tool.ipynb


In this documentation we cover generic tooling functionality (e.g. how to create your own)
as well as examples of tools and how to use them.


.. toctree::
   :maxdepth: 1
   :glob:

   ./tools/examples/*
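Since this page links out to a custom-tools guide, a minimal sketch of what defining a tool looks like follows. It assumes the `Tool` interface from `langchain.agents`; the search function is a hypothetical stand-in.

```python
# A minimal sketch of a custom tool, assuming the Tool interface from langchain.agents.
from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI

def fake_search(query: str) -> str:
    """A hypothetical stand-in search function; replace with a real API call."""
    return f"Results for: {query}"

tools = [
    Tool(
        name="Search",
        func=fake_search,
        description="Useful for answering questions about current events.",
    )
]
agent = initialize_agent(
    tools, OpenAI(temperature=0), agent="zero-shot-react-description", verbose=True
)
```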
120
docs/modules/agents/tools/examples/chatgpt_plugins.ipynb
Normal file
@@ -0,0 +1,120 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3f34700b",
"metadata": {},
"source": [
"# ChatGPT Plugins\n",
"\n",
"This example shows how to use ChatGPT Plugins within LangChain abstractions.\n",
"\n",
"Note 1: This currently only works for plugins with no auth.\n",
"\n",
"Note 2: There are almost certainly other ways to do this, this is just a first pass. If you have better ideas, please open a PR!"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d41405b5",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.agents import load_tools, initialize_agent\n",
"from langchain.tools import AIPluginTool"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d9e61df5",
"metadata": {},
"outputs": [],
"source": [
"tool = AIPluginTool.from_plugin_url(\"https://www.klarna.com/.well-known/ai-plugin.json\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "edc0ea0e",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to check the Klarna Shopping API to see if it has information on available t shirts.\n",
"Action: KlarnaProducts\n",
"Action Input: None\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mUsage Guide: Use the Klarna plugin to get relevant product suggestions for any shopping or researching purpose. The query to be sent should not include stopwords like articles, prepositions and determinants. The api works best when searching for words that are related to products, like their name, brand, model or category. Links will always be returned and should be shown to the user.\n",
"\n",
"OpenAPI Spec: {'openapi': '3.0.1', 'info': {'version': 'v0', 'title': 'Open AI Klarna product Api'}, 'servers': [{'url': 'https://www.klarna.com/us/shopping'}], 'tags': [{'name': 'open-ai-product-endpoint', 'description': 'Open AI Product Endpoint. Query for products.'}], 'paths': {'/public/openai/v0/products': {'get': {'tags': ['open-ai-product-endpoint'], 'summary': 'API for fetching Klarna product information', 'operationId': 'productsUsingGET', 'parameters': [{'name': 'q', 'in': 'query', 'description': 'query, must be between 2 and 100 characters', 'required': True, 'schema': {'type': 'string'}}, {'name': 'size', 'in': 'query', 'description': 'number of products returned', 'required': False, 'schema': {'type': 'integer'}}, {'name': 'budget', 'in': 'query', 'description': 'maximum price of the matching product in local currency, filters results', 'required': False, 'schema': {'type': 'integer'}}], 'responses': {'200': {'description': 'Products found', 'content': {'application/json': {'schema': {'$ref': '#/components/schemas/ProductResponse'}}}}, '503': {'description': 'one or more services are unavailable'}}, 'deprecated': False}}}, 'components': {'schemas': {'Product': {'type': 'object', 'properties': {'attributes': {'type': 'array', 'items': {'type': 'string'}}, 'name': {'type': 'string'}, 'price': {'type': 'string'}, 'url': {'type': 'string'}}, 'title': 'Product'}, 'ProductResponse': {'type': 'object', 'properties': {'products': {'type': 'array', 'items': {'$ref': '#/components/schemas/Product'}}}, 'title': 'ProductResponse'}}}}\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to use the Klarna Shopping API to search for t shirts.\n",
"Action: requests_get\n",
"Action Input: https://www.klarna.com/us/shopping/public/openai/v0/products?q=t%20shirts\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m{\"products\":[{\"name\":\"Lacoste Men's Pack of Plain T-Shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202043025/Clothing/Lacoste-Men-s-Pack-of-Plain-T-Shirts/?utm_source=openai\",\"price\":\"$26.60\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Black\"]},{\"name\":\"Hanes Men's Ultimate 6pk. Crewneck T-Shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201808270/Clothing/Hanes-Men-s-Ultimate-6pk.-Crewneck-T-Shirts/?utm_source=openai\",\"price\":\"$13.82\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White\"]},{\"name\":\"Nike Boy's Jordan Stretch T-shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3201863202/Children-s-Clothing/Nike-Boy-s-Jordan-Stretch-T-shirts/?utm_source=openai\",\"price\":\"$14.99\",\"attributes\":[\"Material:Cotton\",\"Color:White,Green\",\"Model:Boy\",\"Size (Small-Large):S,XL,L,M\"]},{\"name\":\"Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203028500/Clothing/Polo-Classic-Fit-Cotton-V-Neck-T-Shirts-3-Pack/?utm_source=openai\",\"price\":\"$29.95\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Blue,Black\"]},{\"name\":\"adidas Comfort T-shirts Men's 3-pack\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202640533/Clothing/adidas-Comfort-T-shirts-Men-s-3-pack/?utm_source=openai\",\"price\":\"$14.99\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Black\",\"Neckline:Round\"]}]}\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe available t shirts in Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\n",
"Final Answer: The available t shirts in Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"The available t shirts in Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\""
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm = ChatOpenAI(temperature=0,)\n",
"tools = load_tools([\"requests\"] )\n",
"tools += [tool]\n",
"\n",
"agent_chain = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)\n",
"agent_chain.run(\"what t shirts are available in klarna?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e49318a4",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
132
docs/modules/agents/tools/examples/human_tools.ipynb
Normal file
@@ -0,0 +1,132 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Human as a tool\n",
"\n",
"Humans are AGI so they can certainly be used as a tool to help out an AI agent \n",
"when it is confused."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.agents import load_tools, initialize_agent\n",
"\n",
"llm = ChatOpenAI(temperature=0.0)\n",
"math_llm = OpenAI(temperature=0.0)\n",
"tools = load_tools(\n",
"    [\"human\", \"llm-math\"], \n",
"    llm=math_llm,\n",
")\n",
"\n",
"agent_chain = initialize_agent(\n",
"    tools,\n",
"    llm,\n",
"    agent=\"zero-shot-react-description\",\n",
"    verbose=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the above code you can see that the tool takes input directly from the command line.\n",
"You can customize `prompt_func` and `input_func` according to your needs."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI don't know Eric Zhu, so I should ask a human for guidance.\n",
"Action: Human\n",
"Action Input: \"Do you know when Eric Zhu's birthday is?\"\u001b[0m\n",
"\n",
"Do you know when Eric Zhu's birthday is?\n",
"last week\n",
"\n",
"Observation: \u001b[36;1m\u001b[1;3mlast week\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThat's not very helpful. I should ask for more information.\n",
"Action: Human\n",
"Action Input: \"Do you know the specific date of Eric Zhu's birthday?\"\u001b[0m\n",
"\n",
"Do you know the specific date of Eric Zhu's birthday?\n",
"august 1st\n",
"\n",
"Observation: \u001b[36;1m\u001b[1;3maugust 1st\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mNow that I have the date, I can check if it's a leap year or not.\n",
"Action: Calculator\n",
"Action Input: \"Is 2021 a leap year?\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: False\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI have all the information I need to answer the original question.\n",
"Final Answer: Eric Zhu's birthday is on August 1st and it is not a leap year in 2021.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Eric Zhu's birthday is on August 1st and it is not a leap year in 2021.\""
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\n",
"agent_chain.run(\"What is Eric Zhu's birthday?\")\n",
"# Answer with \"last week\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -11,10 +11,10 @@
"\n",
"From https://github.com/SidU/teams-langchain-js/wiki/Connecting-IFTTT-Services.\n",
"\n",
"# Creating a webhook\n",
"## Creating a webhook\n",
"- Go to https://ifttt.com/create\n",
"\n",
"# Configuring the \"If This\"\n",
"## Configuring the \"If This\"\n",
"- Click on the \"If This\" button in the IFTTT interface.\n",
"- Search for \"Webhooks\" in the search bar.\n",
"- Choose the first option for \"Receive a web request with a JSON payload.\"\n",
@@ -24,7 +24,7 @@
"Event Name.\n",
"- Click the \"Create Trigger\" button to save your settings and create your webhook.\n",
"\n",
"# Configuring the \"Then That\"\n",
"## Configuring the \"Then That\"\n",
"- Tap on the \"Then That\" button in the IFTTT interface.\n",
"- Search for the service you want to connect, such as Spotify.\n",
"- Choose an action from the service, such as \"Add track to a playlist\".\n",
@@ -38,7 +38,7 @@
"- Congratulations! You have successfully connected the Webhook to the desired\n",
"service, and you're ready to start receiving data and triggering actions 🎉\n",
"\n",
"# Finishing up\n",
"## Finishing up\n",
"- To get your webhook URL go to https://ifttt.com/maker_webhooks/settings\n",
"- Copy the IFTTT key value from there. The URL is of the form\n",
"https://maker.ifttt.com/use/YOUR_IFTTT_KEY. Grab the YOUR_IFTTT_KEY value.\n"
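To wire the webhook configured above into LangChain, something like the following sketch should work. It assumes the `IFTTTWebhook` tool class; the event name `spotify` and the `IFTTTKey` environment variable are placeholders from the steps above.

```python
# A minimal sketch, assuming the IFTTTWebhook tool wraps the maker webhook URL;
# the event name "spotify" and the key variable are placeholders from the steps above.
import os

from langchain.tools.ifttt import IFTTTWebhook

key = os.environ["IFTTTKey"]
url = f"https://maker.ifttt.com/trigger/spotify/json/with/key/{key}"
tool = IFTTTWebhook(
    name="Spotify",
    description="Add a song to a Spotify playlist.",
    url=url,
)
tool.run("taylor swift")  # sends the text as the JSON payload to the webhook
```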
128
docs/modules/agents/tools/examples/openweathermap.ipynb
Normal file
@@ -0,0 +1,128 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "245a954a",
"metadata": {},
"source": [
"# OpenWeatherMap API\n",
"\n",
"This notebook goes over how to use the OpenWeatherMap component to fetch weather information.\n",
"\n",
"First, you need to sign up for an OpenWeatherMap API key:\n",
"\n",
"1. Go to OpenWeatherMap and sign up for an API key [here](https://openweathermap.org/api/)\n",
"2. pip install pyowm\n",
"\n",
"Then we will need to set some environment variables:\n",
"1. Save your API key into the OPENWEATHERMAP_API_KEY env variable"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "961b3689",
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install pyowm"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "34bb5968",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"os.environ[\"OPENWEATHERMAP_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "ac4910f8",
"metadata": {},
"outputs": [],
"source": [
"from langchain.utilities import OpenWeatherMapAPIWrapper"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "84b8f773",
"metadata": {},
"outputs": [],
"source": [
"weather = OpenWeatherMapAPIWrapper()"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "9651f324-e74a-4f08-a28a-89db029f66f8",
"metadata": {},
"outputs": [],
"source": [
"weather_data = weather.run(\"London,GB\")"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "028f4cba",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In London,GB, the current weather is as follows:\n",
"Detailed status: overcast clouds\n",
"Wind speed: 4.63 m/s, direction: 150°\n",
"Humidity: 67%\n",
"Temperature: \n",
" - Current: 5.35°C\n",
" - High: 6.26°C\n",
" - Low: 3.49°C\n",
" - Feels like: 1.95°C\n",
"Rain: {}\n",
"Heat index: None\n",
"Cloud cover: 100%\n"
]
}
],
"source": [
"print(weather_data)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -73,7 +73,7 @@
"jukit_cell_id": "OHyurqUPbS"
},
"source": [
"# Custom Parameters\n",
"## Custom Parameters\n",
"\n",
"SearxNG supports up to [139 search engines](https://docs.searxng.org/admin/engines/configured_engines.html#configured-engines). You can also customize the Searx wrapper with arbitrary named parameters that will be passed to the Searx search API. In the example below we will make more interesting use of custom search parameters from the Searx search API."
]
@@ -104,7 +104,7 @@
"metadata": {
"jukit_cell_id": "3FyQ6yHI8K",
"tags": [
"scroll-output"
"scroll-output"
]
},
"outputs": [
@@ -161,7 +161,7 @@
"jukit_cell_id": "d0x164ssV1"
},
"source": [
"# Obtaining results with metadata"
"## Obtaining results with metadata"
]
},
{
@@ -192,7 +192,7 @@
"metadata": {
"jukit_cell_id": "r7qUtvKNOh",
"tags": [
"scroll-output"
"scroll-output"
]
},
"outputs": [
@@ -263,7 +263,7 @@
"metadata": {
"jukit_cell_id": "JyNgoFm0vo",
"tags": [
"scroll-output"
"scroll-output"
]
},
"outputs": [
@@ -444,7 +444,7 @@
"metadata": {
"jukit_cell_id": "5NrlredKxM",
"tags": [
"scroll-output"
"scroll-output"
]
},
"outputs": [
@@ -600,7 +600,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.11"
"version": "3.9.1"
}
},
"nbformat": 4,
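For context on the parameters being discussed, here is a minimal sketch of the wrapper usage, assuming the `SearxSearchWrapper` interface and a placeholder host URL:

```python
# A minimal sketch of passing custom engine parameters; the searx_host value
# is a placeholder for your own SearxNG instance.
from langchain.utilities import SearxSearchWrapper

search = SearxSearchWrapper(searx_host="http://127.0.0.1:8888", k=5)
# Arbitrary named parameters are forwarded to the Searx search API:
print(search.run("large language models", engines=["wikipedia"]))
```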
@@ -5,9 +5,9 @@
"id": "16763ed3",
"metadata": {},
"source": [
"## Zapier Natural Language Actions API\n",
"# Zapier Natural Language Actions API\n",
"\\\n",
"Full docs here: https://nla.zapier.com/api/v1/dynamic/docs\n",
"Full docs here: https://nla.zapier.com/api/v1/docs\n",
"\n",
"**Zapier Natural Language Actions** gives you access to the 5k+ apps, 20k+ actions on Zapier's platform through a natural language API interface.\n",
"\n",
@@ -21,7 +21,7 @@
"\n",
"2. User-facing (Oauth): for production scenarios where you are deploying an end-user facing application and LangChain needs access to end-user's exposed actions and connected accounts on Zapier.com\n",
"\n",
"This quick start will focus on the server-side use case for brevity. Review [full docs](https://nla.zapier.com/api/v1/dynamic/docs) or reach out to nla@zapier.com for user-facing oauth developer support.\n",
"This quick start will focus on the server-side use case for brevity. Review [full docs](https://nla.zapier.com/api/v1/docs) or reach out to nla@zapier.com for user-facing oauth developer support.\n",
"\n",
"This example goes over how to use the Zapier integration with a `SimpleSequentialChain`, then an `Agent`.\n",
"In code, below:",
@@ -318,7 +318,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,
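The cells referenced by "In code, below:" fall outside this excerpt; the following is a minimal sketch of the server-side setup, assuming the `ZapierNLAWrapper` and `ZapierToolkit` names used in this changeset and a placeholder API key:

```python
# A minimal sketch of the server-side (API key) setup; the key value is a placeholder.
import os

os.environ["ZAPIER_NLA_API_KEY"] = "..."

from langchain.llms import OpenAI
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.utilities.zapier import ZapierNLAWrapper

llm = OpenAI(temperature=0)
zapier = ZapierNLAWrapper()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
agent = initialize_agent(
    toolkit.get_tools(), llm, agent="zero-shot-react-description", verbose=True
)
agent.run("Summarize the last email I received and send the summary to Slack.")
```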
@@ -1,4 +1,4 @@
# Tools
# Getting Started

Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents.
@@ -118,7 +118,7 @@ Below is a list of all supported tools and relevant information:
- Notes: Uses the Google Custom Search API
- Requires LLM: No
- Extra Parameters: `google_api_key`, `google_cse_id`
- For more information on this, see [this page](../../ecosystem/google_search.md)
- For more information on this, see [this page](../../../ecosystem/google_search.md)

**searx-search**

@@ -135,7 +135,7 @@ Below is a list of all supported tools and relevant information:
- Notes: Calls the [serper.dev](https://serper.dev) Google Search API and then parses results.
- Requires LLM: No
- Extra Parameters: `serper_api_key`
- For more information on this, see [this page](../../ecosystem/google_serper.md)
- For more information on this, see [this page](../../../ecosystem/google_serper.md)

**wikipedia**

@@ -145,3 +145,18 @@ Below is a list of all supported tools and relevant information:
- Requires LLM: No
- Extra Parameters: `top_k_results`

**podcast-api**

- Tool Name: Podcast API
- Tool Description: Use the Listen Notes Podcast API to search all podcasts or episodes. The input should be a question in natural language that this API can answer.
- Notes: A natural language connection to the Listen Notes Podcast API (`https://www.PodcastAPI.com`), specifically the `/search/` endpoint.
- Requires LLM: Yes
- Extra Parameters: `listen_api_key` (your API key to access this endpoint)

**openweathermap-api**

- Tool Name: OpenWeatherMap
- Tool Description: A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. 'London,GB').
- Notes: A connection to the OpenWeatherMap API (https://api.openweathermap.org), specifically the `/data/2.5/weather` endpoint.
- Requires LLM: No
- Extra Parameters: `openweathermap_api_key` (your API key to access this endpoint)
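A minimal sketch of loading the two newly documented tools by name follows; it assumes `load_tools` forwards the listed extra parameters as keyword arguments, and the key values are placeholders.

```python
# A minimal sketch; assumes load_tools accepts the documented extra parameters
# as kwargs and an llm for tools that require one. Key values are placeholders.
from langchain.agents import load_tools
from langchain.llms import OpenAI

tools = load_tools(
    ["openweathermap-api", "podcast-api"],
    llm=OpenAI(temperature=0),     # podcast-api requires an LLM
    openweathermap_api_key="...",  # extra parameter for openweathermap-api
    listen_api_key="...",          # extra parameter for podcast-api
)
```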
@@ -1,6 +1,10 @@
Chains
==========================

.. note::
   `Conceptual Guide <https://docs.langchain.com/docs/components/chains>`_


Using an LLM in isolation is fine for some simple applications,
but many more complex ones require chaining LLMs - either with each other or with other experts.
LangChain provides a standard interface for Chains, as well as some common implementations of chains for ease of use.
@@ -9,8 +13,6 @@ The following sections of documentation are provided:

- `Getting Started <./chains/getting_started.html>`_: A getting started guide for chains, to get you up and running quickly.

- `Key Concepts <./chains/key_concepts.html>`_: A conceptual guide going over the various concepts related to chains.

- `How-To Guides <./chains/how_to_guides.html>`_: A collection of how-to guides. These highlight how to use various types of chains.

- `Reference <../reference/modules/chains.html>`_: API reference documentation for all Chain classes.
@@ -25,5 +27,4 @@ The following sections of documentation are provided:

   ./chains/getting_started.ipynb
   ./chains/how_to_guides.rst
   ./chains/key_concepts.rst
   Reference<../reference/modules/chains.rst>
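As a concrete anchor for the sections listed above, here is a minimal sketch of the simplest chain, assuming the `LLMChain` and `PromptTemplate` interfaces used throughout these docs:

```python
# A minimal sketch of an LLMChain: prompt template in, LLM completion out.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
print(chain.run("colorful socks"))
```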
@@ -149,6 +149,33 @@
"chain.run(\"Search for 'Avatar'\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Listen API Example"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains.api import podcast_docs\n",
"from langchain.chains import APIChain\n",
"\n",
"# Get api key here: https://www.listennotes.com/api/pricing/\n",
"listen_api_key = 'xxx'\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"headers = {\"X-ListenAPI-Key\": listen_api_key}\n",
"chain = APIChain.from_llm_and_api_docs(llm, podcast_docs.PODCAST_DOCS, headers=headers, verbose=True)\n",
"chain.run(\"Search for 'silicon valley bank' podcast episodes, audio length is more than 30 minutes, return only 1 result\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -173,7 +200,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,
@@ -532,7 +532,7 @@
"id": "5fc6f507",
"metadata": {},
"source": [
"Note how our custom table definition and sample rows for `Track` overrides the `sample_rows_in_table_info` parameter. Tables that are not overriden by `custom_table_info`, in this example `Playlist`, will have their table info gathered automatically as usual."
"Note how our custom table definition and sample rows for `Track` overrides the `sample_rows_in_table_info` parameter. Tables that are not overridden by `custom_table_info`, in this example `Playlist`, will have their table info gathered automatically as usual."
]
},
{
@@ -34,10 +34,10 @@
"text": [
"\n",
"\n",
"\u001B[1m> Entering new LLMMathChain chain...\u001B[0m\n",
"whats 2 raised to .12\u001B[32;1m\u001B[1;3m\n",
"Answer: 1.0791812460476249\u001B[0m\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"whats 2 raised to .12\u001b[32;1m\u001b[1;3m\n",
"Answer: 1.0791812460476249\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
@@ -31,7 +31,7 @@
"metadata": {},
"outputs": [],
"source": [
"with open('../../state_of_the_union.txt') as f:\n",
"with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()"
]
},
@@ -122,7 +122,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.9.1"
}
},
"nbformat": 4,
@@ -1,33 +0,0 @@
Generic Chains
--------------

A chain is made up of links, which can be either primitives or other chains.
Primitives can be either `prompts <../prompts.html>`_, `llms <../llms.html>`_, `utils <../utils.html>`_, or other chains.
The examples here are all generic end-to-end chains that are meant to be used to construct other chains rather than serving a specific purpose.

**LLMChain**

- **Links Used**: PromptTemplate, LLM
- **Notes**: This chain is the simplest chain, and is widely used by almost every other chain. This chain takes arbitrary user input, creates a prompt with it from the PromptTemplate, passes that to the LLM, and then returns the output of the LLM as the final output.
- `Example Notebook <./generic/llm_chain.html>`_

**Transformation Chain**

- **Links Used**: TransformationChain
- **Notes**: This notebook shows how to use the Transformation Chain, which takes an arbitrary python function and applies it to inputs/outputs of other chains.
- `Example Notebook <./generic/transformation.html>`_

**Sequential Chain**

- **Links Used**: Sequential
- **Notes**: This notebook shows how to combine calling multiple other chains in sequence.
- `Example Notebook <./generic/sequential_chains.html>`_

.. toctree::
   :maxdepth: 1
   :glob:
   :caption: Generic Chains
   :name: generic
   :hidden:

   ./generic/*
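To illustrate the Sequential Chain entry above, here is a minimal sketch composing two LLMChains, assuming the `SimpleSequentialChain` interface:

```python
# A minimal sketch: the first chain's output feeds the second chain's input.
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.7)
name_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
))
slogan_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["company"],
    template="Write a slogan for the company {company}.",
))
overall = SimpleSequentialChain(chains=[name_chain, slogan_chain], verbose=True)
print(overall.run("colorful socks"))
```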
@@ -2,23 +2,37 @@ How-To Guides
=============

A chain is made up of links, which can be either primitives or other chains.
Primitives can be either `prompts <../prompts.html>`_, `llms <../llms.html>`_, `utils <../utils.html>`_, or other chains.
The examples here are all end-to-end chains for specific applications.
They are broken up into three categories:
Primitives can be either `prompts <../prompts.html>`_, `models <../models.html>`_, arbitrary functions, or other chains.
The examples here are broken up into three sections:

1. `Generic Chains <./generic_how_to.html>`_: Generic chains, that are meant to help build other chains rather than serve a particular purpose.
2. `Utility Chains <./utility_how_to.html>`_: Chains consisting of an LLMChain interacting with a specific util.
3. `Asynchronous <./async_chain.html>`_: Covering asynchronous functionality.
**Generic Functionality**

Covers both generic chains (that are useful in a wide variety of applications) as well as generic functionality related to those chains.

.. toctree::
   :maxdepth: 1
   :glob:
   :hidden:

   ./generic_how_to.rst
   ./utility_how_to.rst
   ./async_chain.ipynb
   ./generic/*

In addition to different types of chains, we also have the following how-to guides for working with chains in general:
**Index-related Chains**

Chains related to working with indexes.

.. toctree::
   :maxdepth: 1
   :glob:

   ./index_examples/*


**All other chains**

All other types of chains!

.. toctree::
   :maxdepth: 1
   :glob:

   ./examples/*

`Load From Hub <./generic/from_hub.html>`_: This notebook covers how to load chains from `LangChainHub <https://github.com/hwchase17/langchain-hub>`_.
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"with open('../../state_of_the_union.txt') as f:\n",
|
||||
"with open(\"../../state_of_the_union.txt\") as f:\n",
|
||||
" state_of_the_union = f.read()"
|
||||
]
|
||||
},
|
||||
@@ -170,7 +170,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.9"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
@@ -5,9 +5,9 @@
|
||||
"id": "134a0785",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Chat Vector DB\n",
|
||||
"# Chat Index\n",
|
||||
"\n",
|
||||
"This notebook goes over how to set up a chain to chat with a vector database. The only difference between this chain and the [VectorDBQAChain](./vector_db_qa.ipynb) is that this allows for passing in of a chat history which can be used to allow for follow up questions."
|
||||
"This notebook goes over how to set up a chain to chat with an index. The only difference between this chain and the [RetrievalQAChain](./vector_db_qa.ipynb) is that this allows for passing in of a chat history which can be used to allow for follow up questions."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -23,7 +23,7 @@
|
||||
"from langchain.vectorstores import Chroma\n",
|
||||
"from langchain.text_splitter import CharacterTextSplitter\n",
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"from langchain.chains import ChatVectorDBChain"
|
||||
"from langchain.chains import ConversationalRetrievalChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -44,7 +44,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.document_loaders import TextLoader\n",
|
||||
"loader = TextLoader('../../state_of_the_union.txt')\n",
|
||||
"loader = TextLoader(\"../../state_of_the_union.txt\")\n",
|
||||
"documents = loader.load()"
|
||||
]
|
||||
},
|
||||
@@ -109,7 +109,7 @@
|
||||
"id": "3c96b118",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We now initialize the ChatVectorDBChain"
|
||||
"We now initialize the ConversationalRetrievalChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -121,7 +121,7 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore)"
|
||||
"qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever())"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -220,22 +220,22 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Return Source Documents\n",
|
||||
"You can also easily return source documents from the ChatVectorDBChain. This is useful for when you want to inspect what documents were returned."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "562769c6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore, return_source_documents=True)"
|
||||
"You can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "562769c6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "ea478300",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -247,17 +247,17 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"execution_count": 13,
|
||||
"id": "4cb75b4e",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \\n\\nWe cannot let this happen. \\n\\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)"
|
||||
"Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)"
|
||||
]
|
||||
},
|
||||
"execution_count": 15,
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -271,13 +271,13 @@
|
||||
"id": "4f49beab",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Chat Vector DB with `search_distance`\n",
|
||||
"## ConversationalRetrievalChain with `search_distance`\n",
|
||||
"If you are using a vector store that supports filtering by search distance, you can add a threshold value parameter."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 14,
|
||||
"id": "5ed8d612",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -287,12 +287,12 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 15,
|
||||
"id": "6a7b3459",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore, return_source_documents=True)\n",
|
||||
"qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)\n",
|
||||
"chat_history = []\n",
|
||||
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
|
||||
"result = qa({\"question\": query, \"chat_history\": chat_history, \"vectordbkwargs\": vectordbkwargs})"
|
||||
@@ -303,25 +303,25 @@
|
||||
"id": "99b96dae",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Chat Vector DB with `map_reduce`\n",
|
||||
"We can also use different types of combine document chains with the Chat Vector DB chain."
|
||||
"## ConversationalRetrievalChain with `map_reduce`\n",
|
||||
"We can also use different types of combine document chains with the ConversationalRetrievalChain chain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"execution_count": 16,
|
||||
"id": "e53a9d66",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains import LLMChain\n",
|
||||
"from langchain.chains.question_answering import load_qa_chain\n",
|
||||
"from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT"
|
||||
"from langchain.chains.chat_index.prompts import CONDENSE_QUESTION_PROMPT"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"execution_count": 19,
|
||||
"id": "bf205e35",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -330,8 +330,8 @@
|
||||
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
|
||||
"doc_chain = load_qa_chain(llm, chain_type=\"map_reduce\")\n",
|
||||
"\n",
|
||||
"chain = ChatVectorDBChain(\n",
|
||||
" vectorstore=vectorstore,\n",
|
||||
"chain = ConversationalRetrievalChain(\n",
|
||||
" retriever=vectorstore.as_retriever(),\n",
|
||||
" question_generator=question_generator,\n",
|
||||
" combine_docs_chain=doc_chain,\n",
|
||||
")"
|
||||
@@ -339,7 +339,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 20,
|
||||
"id": "78155887",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -351,7 +351,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 21,
|
||||
"id": "e54b5fa2",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -361,7 +361,7 @@
|
||||
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"execution_count": 21,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -375,14 +375,14 @@
|
||||
"id": "a2fe6b14",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Chat Vector DB with Question Answering with sources\n",
|
||||
"## ConversationalRetrievalChain with Question Answering with sources\n",
|
||||
"\n",
|
||||
"You can also use this chain with the question answering with sources chain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"execution_count": 22,
|
||||
"id": "d1058fd2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -392,7 +392,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"execution_count": 23,
|
||||
"id": "a6594482",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -401,8 +401,8 @@
|
||||
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
|
||||
"doc_chain = load_qa_with_sources_chain(llm, chain_type=\"map_reduce\")\n",
|
||||
"\n",
|
||||
"chain = ChatVectorDBChain(\n",
|
||||
" vectorstore=vectorstore,\n",
|
||||
"chain = ConversationalRetrievalChain(\n",
|
||||
" retriever=vectorstore.as_retriever(),\n",
|
||||
" question_generator=question_generator,\n",
|
||||
" combine_docs_chain=doc_chain,\n",
|
||||
")"
|
||||
@@ -410,7 +410,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"execution_count": 24,
|
||||
"id": "e2badd21",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -422,7 +422,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"execution_count": 25,
|
||||
"id": "edb31fe5",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -432,7 +432,7 @@
|
||||
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \\nSOURCES: ../../state_of_the_union.txt\""
|
||||
]
|
||||
},
|
||||
"execution_count": 16,
|
||||
"execution_count": 25,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -446,14 +446,14 @@
|
||||
"id": "2324cdc6-98bf-4708-b8cd-02a98b1e5b67",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Chat Vector DB with streaming to `stdout`\n",
|
||||
"## ConversationalRetrievalChain with streaming to `stdout`\n",
|
||||
"\n",
|
||||
"Output from the chain will be streamed to `stdout` token by token in this example."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 26,
|
||||
"id": "2efacec3-2690-4b05-8de3-a32fd2ac3911",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@@ -463,7 +463,7 @@
|
||||
"from langchain.chains.llm import LLMChain\n",
|
||||
"from langchain.callbacks.base import CallbackManager\n",
|
||||
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
|
||||
"from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT\n",
|
||||
"from langchain.chains.chat_index.prompts import CONDENSE_QUESTION_PROMPT, QA_PROMPT\n",
|
||||
"from langchain.chains.question_answering import load_qa_chain\n",
|
||||
"\n",
|
||||
"# Construct a ChatVectorDBChain with a streaming llm for combine docs\n",
|
||||
@@ -474,12 +474,13 @@
|
||||
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
|
||||
"doc_chain = load_qa_chain(streaming_llm, chain_type=\"stuff\", prompt=QA_PROMPT)\n",
|
||||
"\n",
|
||||
"qa = ChatVectorDBChain(vectorstore=vectorstore, combine_docs_chain=doc_chain, question_generator=question_generator)"
|
||||
"qa = ConversationalRetrievalChain(\n",
|
||||
" retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 27,
|
||||
"id": "fd6d43f4-7428-44a4-81bc-26fe88a98762",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@@ -501,7 +502,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"execution_count": 28,
|
||||
"id": "5ab38978-f3e8-4fa7-808c-c79dec48379a",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@@ -532,7 +533,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"execution_count": 29,
|
||||
"id": "a7ba9d8c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -542,12 +543,12 @@
|
||||
" for human, ai in inputs:\n",
|
||||
" res.append(f\"Human:{human}\\nAI:{ai}\")\n",
|
||||
" return \"\\n\".join(res)\n",
|
||||
"qa = ChatVectorDBChain.from_llm(OpenAI(temperature=0), vectorstore, get_chat_history=get_chat_history)"
|
||||
"qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0), vectorstore, get_chat_history=get_chat_history)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"execution_count": 30,
|
||||
"id": "a3e33c0d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -559,7 +560,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 31,
|
||||
"id": "936dc62f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -569,7 +570,7 @@
|
||||
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"execution_count": 31,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -105,7 +105,6 @@
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "1da90437",
|
||||
"metadata": {},
|
||||
@@ -169,7 +168,7 @@
|
||||
"from langchain.text_splitter import CharacterTextSplitter\n",
|
||||
"from langchain.vectorstores import Chroma\n",
|
||||
"\n",
|
||||
"with open('../../state_of_the_union.txt') as f:\n",
|
||||
"with open(\"../../state_of_the_union.txt\") as f:\n",
|
||||
" state_of_the_union = f.read()\n",
|
||||
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
|
||||
"texts = text_splitter.split_text(state_of_the_union)"
|
||||
@@ -236,7 +235,7 @@
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3",
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
@@ -250,7 +249,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.12 (main, Mar 26 2022, 15:51:15) \n[Clang 13.1.6 (clang-1316.0.21.2)]"
|
||||
"version": "3.9.1"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
@@ -42,7 +42,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"with open('../../state_of_the_union.txt') as f:\n",
|
||||
"with open(\"../../state_of_the_union.txt\") as f:\n",
|
||||
" state_of_the_union = f.read()\n",
|
||||
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
|
||||
"texts = text_splitter.split_text(state_of_the_union)\n",
|
||||
@@ -61,7 +61,7 @@
|
||||
],
|
||||
"source": [
|
||||
"from langchain.document_loaders import TextLoader\n",
|
||||
"loader = TextLoader('../../state_of_the_union.txt')\n",
|
||||
"loader = TextLoader(\"../../state_of_the_union.txt\")\n",
|
||||
"docsearch = index_creator.from_loaders([loader])"
|
||||
]
|
||||
},
|
||||
@@ -7,7 +7,7 @@
|
||||
"source": [
|
||||
"# Summarization\n",
|
||||
"\n",
|
||||
"This notebook walks through how to use LangChain for summarization over a list of documents. It covers three different chain types: `stuff`, `map_reduce`, and `refine`. For a more in depth explanation of what these chain types are, see [here](../combine_docs.md)."
|
||||
"This notebook walks through how to use LangChain for summarization over a list of documents. It covers three different chain types: `stuff`, `map_reduce`, and `refine`. For a more in depth explanation of what these chain types are, see [here](https://docs.langchain.com/docs/components/chains/index_related_chains)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -43,7 +43,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"with open('../../state_of_the_union.txt') as f:\n",
|
||||
"with open(\"../../state_of_the_union.txt\") as f:\n",
|
||||
" state_of_the_union = f.read()\n",
|
||||
"texts = text_splitter.split_text(state_of_the_union)"
|
||||
]
|
||||
@@ -5,9 +5,9 @@
|
||||
"id": "07c1e3b9",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Vector DB Question/Answering\n",
|
||||
"# Retrieval Question/Answering\n",
|
||||
"\n",
|
||||
"This example showcases question answering over a vector database."
|
||||
"This example showcases question answering over an index."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -20,7 +20,8 @@
|
||||
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
|
||||
"from langchain.vectorstores import Chroma\n",
|
||||
"from langchain.text_splitter import CharacterTextSplitter\n",
|
||||
"from langchain import OpenAI, VectorDBQA"
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"from langchain.chains import RetrievalQA"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -40,7 +41,7 @@
|
||||
],
|
||||
"source": [
|
||||
"from langchain.document_loaders import TextLoader\n",
|
||||
"loader = TextLoader('../../state_of_the_union.txt')\n",
|
||||
"loader = TextLoader(\"../../state_of_the_union.txt\")\n",
|
||||
"documents = loader.load()\n",
|
||||
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
|
||||
"texts = text_splitter.split_documents(documents)\n",
|
||||
@@ -56,7 +57,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type=\"stuff\", vectorstore=docsearch)"
|
||||
"qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type=\"stuff\", retriever=docsearch.as_retriever())"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -68,7 +69,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice and federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
|
||||
"\" The president said that she is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support, from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
@@ -87,7 +88,7 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Chain Type\n",
|
||||
"You can easily specify different chain types to load and use in the VectorDBQA chain. For a more detailed walkthrough of these types, please see [this notebook](question_answering.ipynb).\n",
|
||||
"You can easily specify different chain types to load and use in the RetrievalQA chain. For a more detailed walkthrough of these types, please see [this notebook](question_answering.ipynb).\n",
|
||||
"\n",
|
||||
"There are two ways to load different chain types. First, you can specify the chain type argument in the `from_chain_type` method. This allows you to pass in the name of the chain type you want to use. For example, in the below we change the chain type to `map_reduce`."
|
||||
]
|
||||
@@ -99,7 +100,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type=\"map_reduce\", vectorstore=docsearch)"
|
||||
"qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type=\"map_reduce\", retriever=docsearch.as_retriever())"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -111,7 +112,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, from a family of public school educators and police officers, a consensus builder, and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
|
||||
"\" The president said that Judge Ketanji Brown Jackson is one of our nation's top legal minds, a former top litigator in private practice and a former federal public defender, from a family of public school educators and police officers, a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
@@ -129,24 +130,24 @@
|
||||
"id": "60368f38",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The above way allows you to really simply change the chain_type, but it does provide a ton of flexibility over parameters to that chain type. If you want to control those parameters, you can load the chain directly (as you did in [this notebook](question_answering.ipynb)) and then pass that directly to the the VectorDBQA chain with the `combine_documents_chain` parameter. For example:"
|
||||
"The above way allows you to really simply change the chain_type, but it does provide a ton of flexibility over parameters to that chain type. If you want to control those parameters, you can load the chain directly (as you did in [this notebook](question_answering.ipynb)) and then pass that directly to the the RetrievalQA chain with the `combine_documents_chain` parameter. For example:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"execution_count": 9,
|
||||
"id": "7b403f0d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains.question_answering import load_qa_chain\n",
|
||||
"qa_chain = load_qa_chain(OpenAI(temperature=0), chain_type=\"stuff\")\n",
|
||||
"qa = VectorDBQA(combine_documents_chain=qa_chain, vectorstore=docsearch)"
|
||||
"qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=docsearch.as_retriever())"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"execution_count": 10,
|
||||
"id": "9e04a9ac",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -156,7 +157,7 @@
|
||||
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and from a family of public school educators and police officers. He also said that she is a consensus builder and has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
|
||||
]
|
||||
},
|
||||
"execution_count": 19,
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -177,7 +178,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"execution_count": 11,
|
||||
"id": "a45232a2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -196,28 +197,28 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"execution_count": 13,
|
||||
"id": "9b5c8d1d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain_type_kwargs = {\"prompt\": PROMPT}\n",
|
||||
"qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type=\"stuff\", vectorstore=docsearch, chain_type_kwargs=chain_type_kwargs)"
|
||||
"qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type=\"stuff\", retriever=docsearch.as_retriever(), chain_type_kwargs=chain_type_kwargs)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"execution_count": 14,
|
||||
"id": "26ee7671",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\" Il Presidente ha detto che Ketanji Brown Jackson è uno dei pensatori legali più importanti del nostro Paese, che continuerà l'eccellente eredità di giustizia Breyer. È un ex principale litigante in pratica privata, un ex difensore federale pubblico e appartiene a una famiglia di insegnanti e poliziotti delle scuole pubbliche. È un costruttore di consenso che ha ricevuto un ampio supporto da parte di Fraternal Order of Police e giudici designati da democratici e repubblicani.\""
|
||||
"\" Il presidente ha detto che Ketanji Brown Jackson è una delle menti legali più importanti del paese, che continuerà l'eccellenza di Justice Breyer e che ha ricevuto un ampio sostegno, da Fraternal Order of Police a ex giudici nominati da democratici e repubblicani.\""
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -238,17 +239,17 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"execution_count": 15,
|
||||
"id": "af093aba",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type=\"stuff\", vectorstore=docsearch, return_source_documents=True)"
|
||||
"qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type=\"stuff\", retriever=docsearch.as_retriever(), return_source_documents=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"execution_count": 16,
|
||||
"id": "eac11321",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -259,17 +260,17 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 17,
|
||||
"id": "7d75945a",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\" The president said that Ketanji Brown Jackson is one of our nation's top legal minds and that she will continue Justice Breyer's legacy of excellence.\""
|
||||
"\" The president said that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice and a former federal public defender from a family of public school educators and police officers, and that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"execution_count": 17,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -280,20 +281,20 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 18,
|
||||
"id": "35b4f31f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[Document(page_content='In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections. \\n\\nWe cannot let this happen. \\n\\nTonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={}, lookup_index=0),\n",
|
||||
" Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \\n\\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \\n\\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \\n\\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \\n\\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \\n\\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', lookup_str='', metadata={}, lookup_index=0),\n",
|
||||
" Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \\n\\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \\n\\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \\n\\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \\n\\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \\n\\nFirst, beat the opioid epidemic.', lookup_str='', metadata={}, lookup_index=0),\n",
|
||||
" Document(page_content='As I’ve told Xi Jinping, it is never a good bet to bet against the American people. \\n\\nWe’ll create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \\n\\nAnd we’ll do it all to withstand the devastating effects of the climate crisis and promote environmental justice. \\n\\nWe’ll build a national network of 500,000 electric vehicle charging stations, begin to replace poisonous lead pipes—so every child—and every American—has clean water to drink at home and at school, provide affordable high-speed internet for every American—urban, suburban, rural, and tribal communities. \\n\\n4,000 projects have already been announced. \\n\\nAnd tonight, I’m announcing that this year we will start fixing over 65,000 miles of highway and 1,500 bridges in disrepair. \\n\\nWhen we use taxpayer dollars to rebuild America – we are going to Buy American: buy American products to support American jobs.', lookup_str='', metadata={}, lookup_index=0)]"
|
||||
"[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),\n",
|
||||
" Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \\n\\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \\n\\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \\n\\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \\n\\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \\n\\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),\n",
|
||||
" Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \\n\\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \\n\\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \\n\\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \\n\\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \\n\\nFirst, beat the opioid epidemic.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0),\n",
|
||||
" Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \\n\\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \\n\\nThat ends on my watch. \\n\\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \\n\\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \\n\\nLet’s pass the Paycheck Fairness Act and paid leave. \\n\\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \\n\\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', lookup_str='', metadata={'source': '../../state_of_the_union.txt'}, lookup_index=0)]"
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
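Taken together, this file's hunks are one mechanical migration: every `VectorDBQA(..., vectorstore=docsearch)` becomes `RetrievalQA(..., retriever=docsearch.as_retriever())`, with `chain_type`, `chain_type_kwargs`, and `return_source_documents` carried over unchanged. A minimal end-to-end sketch of the post-migration API, assuming the `Chroma.from_documents` setup implied by the import hunk and an OpenAI API key:

```python
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

# Load and split the source text, as in the hunks above.
loader = TextLoader("../../state_of_the_union.txt")
documents = loader.load()
texts = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(documents)

# Assumption: the notebook builds its vectorstore roughly like this; the diff
# only shows the imports, not the construction cell.
docsearch = Chroma.from_documents(texts, OpenAIEmbeddings())

# New-style construction: a retriever is passed, not the vectorstore itself.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",                  # "map_reduce" and "refine" also work
    retriever=docsearch.as_retriever(),
    return_source_documents=True,        # yields the Document list shown above
)
result = qa({"query": "What did the president say about Ketanji Brown Jackson?"})
print(result["result"])
print(result["source_documents"])
```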
Some files were not shown because too many files have changed in this diff.