Compare commits


2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Harrison Chase | e61e37a40a | Update langchain/output_parsers/base.py (Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>) | 2023-03-16 00:26:45 -07:00 |
| Harrison Chase | f267f59186 | wip guardrails | 2023-03-14 21:58:29 -07:00 |
567 changed files with 9162 additions and 24459 deletions


@@ -1,2 +0,0 @@
.venv
.github


@@ -73,8 +73,6 @@ poetry install -E all
This will install all requirements for running the package, examples, linting, formatting, tests, and coverage. Note the `-E all` flag will install all optional dependencies necessary for integration testing.
❗Note: If you're running Poetry 1.4.1 and receive a `WheelFileValidationError` for `debugpy` during installation, you can try either downgrading to Poetry 1.4.0 or disabling "modern installation" (`poetry config installer.modern-installation false`) and re-install requirements. See [this `debugpy` issue](https://github.com/microsoft/debugpy/issues/1246) for more details.
Now, you should be able to run the common tasks in the following section.
## ✅Common Tasks

.gitignore (6 changes)

@@ -135,9 +135,3 @@ dmypy.json
# macOS display setting files
.DS_Store
# Wandb directory
wandb/
# asdf tool versions
.tool-versions


@@ -1,39 +0,0 @@
# Use the Python base image
FROM python:3.11.2-bullseye AS builder
# Print Python version
RUN echo "Python version:" && python --version && echo ""
# Install Poetry
RUN echo "Installing Poetry..." && \
curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/install-poetry.py | python -
# Add Poetry to PATH
ENV PATH="${PATH}:/root/.local/bin"
# Test if Poetry is added to PATH
RUN echo "Poetry version:" && poetry --version && echo ""
# Set working directory
WORKDIR /app
# Use a multi-stage build to install dependencies
FROM builder AS dependencies
# Copy only the dependency files for installation
COPY pyproject.toml poetry.lock poetry.toml ./
# Install Poetry dependencies (this layer will be cached as long as the dependencies don't change)
RUN poetry install --no-interaction --no-ansi
# Use a multi-stage build to run tests
FROM dependencies AS tests
# Copy the rest of the app source code (this layer will be invalidated and rebuilt whenever the source code changes)
COPY . .
# Set entrypoint to run tests
ENTRYPOINT ["poetry", "run", "pytest"]
# Set default command to run all unit tests
CMD ["tests/unit_tests"]


@@ -1,7 +1,7 @@
@@ -1,7 +1,7 @@
.PHONY: all clean format lint test tests test_watch integration_tests docker_tests help
.PHONY: all clean format lint test tests test_watch integration_tests help
all: help
coverage:
	poetry run pytest --cov \
		--cov-config=.coveragerc \
@@ -40,10 +40,6 @@ test_watch:
integration_tests:
	poetry run pytest tests/integration_tests
docker_tests:
	docker build -t my-langchain-image:test .
	docker run --rm my-langchain-image:test
help:
	@echo '----'
	@echo 'coverage - run unit tests and generate coverage report'
@@ -55,4 +51,3 @@ help:
	@echo 'test - run unit tests'
	@echo 'test_watch - run unit tests in watch mode'
	@echo 'integration_tests - run integration tests'
	@echo 'docker_tests - run unit tests in docker'


@@ -32,7 +32,7 @@ This library is aimed at assisting in the development of those types of applications.
**🤖 Agents**
- [Documentation](https://langchain.readthedocs.io/en/latest/modules/agents.html)
- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/agents.html)
- End-to-end Example: [GPT+WolframAlpha](https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain)
## 📖 Documentation

Binary file not shown (before: 559 KiB).


@@ -23,7 +23,7 @@ with open("../pyproject.toml") as f:
# -- Project information -----------------------------------------------------
project = "🦜🔗 LangChain"
copyright = "2023, Harrison Chase"
copyright = "2022, Harrison Chase"
author = "Harrison Chase"
version = data["tool"]["poetry"]["version"]


@@ -37,6 +37,3 @@ A minimal example on how to run LangChain on Vercel using Flask.
## [SteamShip](https://github.com/steamship-core/steamship-langchain/)
This repository contains LangChain adapters for Steamship, enabling LangChain developers to rapidly deploy their apps on Steamship.
This includes: production-ready endpoints, horizontal scaling across dependencies, persistent storage of app state, multi-tenancy support, etc.
## [Langchain-serve](https://github.com/jina-ai/langchain-serve)
This repository allows users to serve local chains and agents as RESTful, gRPC, or Websocket APIs thanks to [Jina](https://docs.jina.ai/). Deploy your chains & agents with ease and enjoy independent scaling, serverless and autoscaling APIs, as well as a Streamlit playground on Jina AI Cloud.


@@ -1,292 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Aim\n",
"\n",
"Aim makes it super easy to visualize and debug LangChain executions. Aim tracks inputs and outputs of LLMs and tools, as well as actions of agents. \n",
"\n",
"With Aim, you can easily debug and examine an individual execution:\n",
"\n",
"![](https://user-images.githubusercontent.com/13848158/227784778-06b806c7-74a1-4d15-ab85-9ece09b458aa.png)\n",
"\n",
"Additionally, you have the option to compare multiple executions side by side:\n",
"\n",
"![](https://user-images.githubusercontent.com/13848158/227784994-699b24b7-e69b-48f9-9ffa-e6a6142fd719.png)\n",
"\n",
"Aim is fully open source, [learn more](https://github.com/aimhubio/aim) about Aim on GitHub.\n",
"\n",
"Let's move forward and see how to enable and configure Aim callback."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>Tracking LangChain Executions with Aim</h3>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this notebook we will explore three usage scenarios. To start off, we will install the necessary packages and import certain modules. Subsequently, we will configure two environment variables that can be established either within the Python script or through the terminal."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mf88kuCJhbVu"
},
"outputs": [],
"source": [
"!pip install aim\n",
"!pip install langchain\n",
"!pip install openai\n",
"!pip install google-search-results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "g4eTuajwfl6L"
},
"outputs": [],
"source": [
"import os\n",
"from datetime import datetime\n",
"\n",
"from langchain.llms import OpenAI\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.callbacks import AimCallbackHandler, StdOutCallbackHandler"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Our examples use a GPT model as the LLM, and OpenAI offers an API for this purpose. You can obtain the key from the following link: https://platform.openai.com/account/api-keys .\n",
"\n",
"We will use the SerpApi to retrieve search results from Google. To acquire the SerpApi key, please go to https://serpapi.com/manage-api-key ."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "T1bSmKd6V2If"
},
"outputs": [],
"source": [
"os.environ[\"OPENAI_API_KEY\"] = \"...\"\n",
"os.environ[\"SERPAPI_API_KEY\"] = \"...\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "QenUYuBZjIzc"
},
"source": [
"The event methods of `AimCallbackHandler` accept the LangChain module or agent as input and log at least the prompts and generated results, as well as the serialized version of the LangChain module, to the designated Aim run."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "KAz8weWuUeXF"
},
"outputs": [],
"source": [
"session_group = datetime.now().strftime(\"%m.%d.%Y_%H.%M.%S\")\n",
"aim_callback = AimCallbackHandler(\n",
" repo=\".\",\n",
" experiment_name=\"scenario 1: OpenAI LLM\",\n",
")\n",
"\n",
"manager = CallbackManager([StdOutCallbackHandler(), aim_callback])\n",
"llm = OpenAI(temperature=0, callback_manager=manager, verbose=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "b8WfByB4fl6N"
},
"source": [
"The `flush_tracker` function is used to record LangChain assets on Aim. By default, the session is reset rather than being terminated outright."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>Scenario 1</h3> In the first scenario, we will use OpenAI LLM."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "o_VmneyIUyx8"
},
"outputs": [],
"source": [
"# scenario 1 - LLM\n",
"llm_result = llm.generate([\"Tell me a joke\", \"Tell me a poem\"] * 3)\n",
"aim_callback.flush_tracker(\n",
" langchain_asset=llm,\n",
" experiment_name=\"scenario 2: Chain with multiple SubChains on multiple generations\",\n",
")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>Scenario 2</h3> Scenario two involves chaining with multiple SubChains across multiple generations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "trxslyb1U28Y"
},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uauQk10SUzF6"
},
"outputs": [],
"source": [
"# scenario 2 - Chain\n",
"template = \"\"\"You are a playwright. Given the title of play, it is your job to write a synopsis for that title.\n",
"Title: {title}\n",
"Playwright: This is a synopsis for the above play:\"\"\"\n",
"prompt_template = PromptTemplate(input_variables=[\"title\"], template=template)\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)\n",
"\n",
"test_prompts = [\n",
" {\"title\": \"documentary about good video games that push the boundary of game design\"},\n",
" {\"title\": \"the phenomenon behind the remarkable speed of cheetahs\"},\n",
" {\"title\": \"the best in class mlops tooling\"},\n",
"]\n",
"synopsis_chain.apply(test_prompts)\n",
"aim_callback.flush_tracker(\n",
" langchain_asset=synopsis_chain, experiment_name=\"scenario 3: Agent with Tools\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>Scenario 3</h3> The third scenario involves an agent with tools."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "_jN73xcPVEpI"
},
"outputs": [],
"source": [
"from langchain.agents import initialize_agent, load_tools"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Gpq4rk6VT9cu",
"outputId": "68ae261e-d0a2-4229-83c4-762562263b66"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Search\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mLeonardo DiCaprio seemed to prove a long-held theory about his love life right after splitting from girlfriend Camila Morrone just months ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to find out Camila Morrone's age\n",
"Action: Search\n",
"Action Input: \"Camila Morrone age\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m25 years\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate 25 raised to the 0.43 power\n",
"Action: Calculator\n",
"Action Input: 25^0.43\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 3.991298452658078\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Camila Morrone is Leo DiCaprio's girlfriend and her current age raised to the 0.43 power is 3.991298452658078.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"# scenario 3 - Agent with Tools\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callback_manager=manager)\n",
"agent = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=\"zero-shot-react-description\",\n",
" callback_manager=manager,\n",
" verbose=True,\n",
")\n",
"agent.run(\n",
" \"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\"\n",
")\n",
"aim_callback.flush_tracker(langchain_asset=agent, reset=False, finish=True)"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"provenance": []
},
"gpuClass": "standard",
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 1
}


@@ -1,46 +0,0 @@
# Apify
This page covers how to use [Apify](https://apify.com) within LangChain.
## Overview
Apify is a cloud platform for web scraping and data extraction,
which provides an [ecosystem](https://apify.com/store) of more than a thousand
ready-made apps called *Actors* for various scraping, crawling, and extraction use cases.
[![Apify Actors](../_static/ApifyActors.png)](https://apify.com/store)
This integration enables you to run Actors on the Apify platform and load their results into LangChain to feed your vector
indexes with documents and data from the web, e.g. to generate answers from websites with documentation,
blogs, or knowledge bases.
## Installation and Setup
- Install the Apify API client for Python with `pip install apify-client`
- Get your [Apify API token](https://console.apify.com/account/integrations) and either set it as
an environment variable (`APIFY_API_TOKEN`) or pass it to the `ApifyWrapper` as `apify_api_token` in the constructor.
## Wrappers
### Utility
You can use the `ApifyWrapper` to run Actors on the Apify platform.
```python
from langchain.utilities import ApifyWrapper
```
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/apify.ipynb).
### Loader
You can also use our `ApifyDatasetLoader` to get data from an Apify dataset.
```python
from langchain.document_loaders import ApifyDatasetLoader
```
For a more detailed walkthrough of this loader, see [this notebook](../modules/indexes/document_loaders/examples/apify_dataset.ipynb).
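For orientation, here is a minimal end-to-end sketch combining both pieces. It assumes `APIFY_API_TOKEN` is set in the environment; the Actor ID, run input, and dataset item keys are illustrative, so check the notebooks above for the exact parameters.
```python
from langchain.docstore.document import Document
from langchain.utilities import ApifyWrapper

apify = ApifyWrapper()  # reads APIFY_API_TOKEN from the environment

# call_actor runs the Actor and returns an ApifyDatasetLoader over its results.
# The Actor ID and item keys ("text", "url") are illustrative assumptions.
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://docs.apify.com/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
docs = loader.load()
```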


@@ -24,4 +24,4 @@ To import this vectorstore:
from langchain.vectorstores import AtlasDB
```
For a more detailed walkthrough of the AtlasDB wrapper, see [this notebook](../modules/indexes/vectorstores/examples/atlas.ipynb)
For a more detailed walkthrough of the AtlasDB wrapper, see [this notebook](../modules/indexes/vectorstore_examples/atlas.ipynb)


@@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Chroma
```
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/indexes/vectorstores/getting_started.ipynb)
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)
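As a quick orientation, a minimal sketch of the wrapper in action (assuming OpenAI embeddings and Chroma's default in-memory configuration):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Build a vector store from a few texts, then run a semantic search.
db = Chroma.from_texts(
    ["Chroma is a vector database", "LangChain wraps Chroma"],
    OpenAIEmbeddings(),
)
docs = db.similarity_search("What is Chroma?")
```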


@@ -1,588 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# ClearML Integration\n",
"\n",
"In order to properly keep track of your langchain experiments and their results, you can enable the ClearML integration. ClearML is an experiment manager that neatly tracks and organizes all your experiment runs.\n",
"\n",
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/ecosystem/clearml_tracking.ipynb\">\n",
" <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
"</a>"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Getting API Credentials\n",
"\n",
"We'll be using quite some APIs in this notebook, here is a list and where to get them:\n",
"\n",
"- ClearML: https://app.clear.ml/settings/workspace-configuration\n",
"- OpenAI: https://platform.openai.com/account/api-keys\n",
"- SerpAPI (google search): https://serpapi.com/dashboard"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"os.environ[\"CLEARML_API_ACCESS_KEY\"] = \"\"\n",
"os.environ[\"CLEARML_API_SECRET_KEY\"] = \"\"\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
"os.environ[\"SERPAPI_API_KEY\"] = \"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Setting Up"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install clearml\n",
"!pip install pandas\n",
"!pip install textstat\n",
"!pip install spacy\n",
"!python -m spacy download en_core_web_sm"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The clearml callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/allegroai/clearml/issues with the tag `langchain`.\n"
]
}
],
"source": [
"from datetime import datetime\n",
"from langchain.callbacks import ClearMLCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.llms import OpenAI\n",
"\n",
"# Setup and use the ClearML Callback\n",
"clearml_callback = ClearMLCallbackHandler(\n",
" task_type=\"inference\",\n",
" project_name=\"langchain_callback_demo\",\n",
" task_name=\"llm\",\n",
" tags=[\"test\"],\n",
" # Change the following parameters based on the amount of detail you want tracked\n",
" visualize=True,\n",
" complexity_metrics=True,\n",
" stream_logs=True\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), clearml_callback])\n",
"# Get the OpenAI model ready to go\n",
"llm = OpenAI(temperature=0, callback_manager=manager, verbose=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Scenario 1: Just an LLM\n",
"\n",
"First, let's just run a single LLM a few times and capture the resulting prompt-answer conversation in ClearML"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}\n",
"{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}\n",
"{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}\n",
"{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}\n",
"{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a joke'}\n",
"{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Tell me a poem'}\n",
"{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\\n\\nQ: What did the fish say when it hit the wall?\\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}\n",
"{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\\n\\nRoses are red,\\nViolets are blue,\\nSugar is sweet,\\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}\n",
"{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\\n\\nQ: What did the fish say when it hit the wall?\\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}\n",
"{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\\n\\nRoses are red,\\nViolets are blue,\\nSugar is sweet,\\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}\n",
"{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\\n\\nQ: What did the fish say when it hit the wall?\\nA: Dam!', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 109.04, 'flesch_kincaid_grade': 1.3, 'smog_index': 0.0, 'coleman_liau_index': -1.24, 'automated_readability_index': 0.3, 'dale_chall_readability_score': 5.5, 'difficult_words': 0, 'linsear_write_formula': 5.5, 'gunning_fog': 5.2, 'text_standard': '5th and 6th grade', 'fernandez_huerta': 133.58, 'szigriszt_pazos': 131.54, 'gutierrez_polini': 62.3, 'crawford': -0.2, 'gulpease_index': 79.8, 'osman': 116.91}\n",
"{'action': 'on_llm_end', 'token_usage_prompt_tokens': 24, 'token_usage_completion_tokens': 138, 'token_usage_total_tokens': 162, 'model_name': 'text-davinci-003', 'step': 4, 'starts': 2, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 0, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': '\\n\\nRoses are red,\\nViolets are blue,\\nSugar is sweet,\\nAnd so are you.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 83.66, 'flesch_kincaid_grade': 4.8, 'smog_index': 0.0, 'coleman_liau_index': 3.23, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 6.71, 'difficult_words': 2, 'linsear_write_formula': 6.5, 'gunning_fog': 8.28, 'text_standard': '6th and 7th grade', 'fernandez_huerta': 115.58, 'szigriszt_pazos': 112.37, 'gutierrez_polini': 54.83, 'crawford': 1.4, 'gulpease_index': 72.1, 'osman': 100.17}\n",
"{'action_records': action name step starts ends errors text_ctr chain_starts \\\n",
"0 on_llm_start OpenAI 1 1 0 0 0 0 \n",
"1 on_llm_start OpenAI 1 1 0 0 0 0 \n",
"2 on_llm_start OpenAI 1 1 0 0 0 0 \n",
"3 on_llm_start OpenAI 1 1 0 0 0 0 \n",
"4 on_llm_start OpenAI 1 1 0 0 0 0 \n",
"5 on_llm_start OpenAI 1 1 0 0 0 0 \n",
"6 on_llm_end NaN 2 1 1 0 0 0 \n",
"7 on_llm_end NaN 2 1 1 0 0 0 \n",
"8 on_llm_end NaN 2 1 1 0 0 0 \n",
"9 on_llm_end NaN 2 1 1 0 0 0 \n",
"10 on_llm_end NaN 2 1 1 0 0 0 \n",
"11 on_llm_end NaN 2 1 1 0 0 0 \n",
"12 on_llm_start OpenAI 3 2 1 0 0 0 \n",
"13 on_llm_start OpenAI 3 2 1 0 0 0 \n",
"14 on_llm_start OpenAI 3 2 1 0 0 0 \n",
"15 on_llm_start OpenAI 3 2 1 0 0 0 \n",
"16 on_llm_start OpenAI 3 2 1 0 0 0 \n",
"17 on_llm_start OpenAI 3 2 1 0 0 0 \n",
"18 on_llm_end NaN 4 2 2 0 0 0 \n",
"19 on_llm_end NaN 4 2 2 0 0 0 \n",
"20 on_llm_end NaN 4 2 2 0 0 0 \n",
"21 on_llm_end NaN 4 2 2 0 0 0 \n",
"22 on_llm_end NaN 4 2 2 0 0 0 \n",
"23 on_llm_end NaN 4 2 2 0 0 0 \n",
"\n",
" chain_ends llm_starts ... difficult_words linsear_write_formula \\\n",
"0 0 1 ... NaN NaN \n",
"1 0 1 ... NaN NaN \n",
"2 0 1 ... NaN NaN \n",
"3 0 1 ... NaN NaN \n",
"4 0 1 ... NaN NaN \n",
"5 0 1 ... NaN NaN \n",
"6 0 1 ... 0.0 5.5 \n",
"7 0 1 ... 2.0 6.5 \n",
"8 0 1 ... 0.0 5.5 \n",
"9 0 1 ... 2.0 6.5 \n",
"10 0 1 ... 0.0 5.5 \n",
"11 0 1 ... 2.0 6.5 \n",
"12 0 2 ... NaN NaN \n",
"13 0 2 ... NaN NaN \n",
"14 0 2 ... NaN NaN \n",
"15 0 2 ... NaN NaN \n",
"16 0 2 ... NaN NaN \n",
"17 0 2 ... NaN NaN \n",
"18 0 2 ... 0.0 5.5 \n",
"19 0 2 ... 2.0 6.5 \n",
"20 0 2 ... 0.0 5.5 \n",
"21 0 2 ... 2.0 6.5 \n",
"22 0 2 ... 0.0 5.5 \n",
"23 0 2 ... 2.0 6.5 \n",
"\n",
" gunning_fog text_standard fernandez_huerta szigriszt_pazos \\\n",
"0 NaN NaN NaN NaN \n",
"1 NaN NaN NaN NaN \n",
"2 NaN NaN NaN NaN \n",
"3 NaN NaN NaN NaN \n",
"4 NaN NaN NaN NaN \n",
"5 NaN NaN NaN NaN \n",
"6 5.20 5th and 6th grade 133.58 131.54 \n",
"7 8.28 6th and 7th grade 115.58 112.37 \n",
"8 5.20 5th and 6th grade 133.58 131.54 \n",
"9 8.28 6th and 7th grade 115.58 112.37 \n",
"10 5.20 5th and 6th grade 133.58 131.54 \n",
"11 8.28 6th and 7th grade 115.58 112.37 \n",
"12 NaN NaN NaN NaN \n",
"13 NaN NaN NaN NaN \n",
"14 NaN NaN NaN NaN \n",
"15 NaN NaN NaN NaN \n",
"16 NaN NaN NaN NaN \n",
"17 NaN NaN NaN NaN \n",
"18 5.20 5th and 6th grade 133.58 131.54 \n",
"19 8.28 6th and 7th grade 115.58 112.37 \n",
"20 5.20 5th and 6th grade 133.58 131.54 \n",
"21 8.28 6th and 7th grade 115.58 112.37 \n",
"22 5.20 5th and 6th grade 133.58 131.54 \n",
"23 8.28 6th and 7th grade 115.58 112.37 \n",
"\n",
" gutierrez_polini crawford gulpease_index osman \n",
"0 NaN NaN NaN NaN \n",
"1 NaN NaN NaN NaN \n",
"2 NaN NaN NaN NaN \n",
"3 NaN NaN NaN NaN \n",
"4 NaN NaN NaN NaN \n",
"5 NaN NaN NaN NaN \n",
"6 62.30 -0.2 79.8 116.91 \n",
"7 54.83 1.4 72.1 100.17 \n",
"8 62.30 -0.2 79.8 116.91 \n",
"9 54.83 1.4 72.1 100.17 \n",
"10 62.30 -0.2 79.8 116.91 \n",
"11 54.83 1.4 72.1 100.17 \n",
"12 NaN NaN NaN NaN \n",
"13 NaN NaN NaN NaN \n",
"14 NaN NaN NaN NaN \n",
"15 NaN NaN NaN NaN \n",
"16 NaN NaN NaN NaN \n",
"17 NaN NaN NaN NaN \n",
"18 62.30 -0.2 79.8 116.91 \n",
"19 54.83 1.4 72.1 100.17 \n",
"20 62.30 -0.2 79.8 116.91 \n",
"21 54.83 1.4 72.1 100.17 \n",
"22 62.30 -0.2 79.8 116.91 \n",
"23 54.83 1.4 72.1 100.17 \n",
"\n",
"[24 rows x 39 columns], 'session_analysis': prompt_step prompts name output_step \\\n",
"0 1 Tell me a joke OpenAI 2 \n",
"1 1 Tell me a poem OpenAI 2 \n",
"2 1 Tell me a joke OpenAI 2 \n",
"3 1 Tell me a poem OpenAI 2 \n",
"4 1 Tell me a joke OpenAI 2 \n",
"5 1 Tell me a poem OpenAI 2 \n",
"6 3 Tell me a joke OpenAI 4 \n",
"7 3 Tell me a poem OpenAI 4 \n",
"8 3 Tell me a joke OpenAI 4 \n",
"9 3 Tell me a poem OpenAI 4 \n",
"10 3 Tell me a joke OpenAI 4 \n",
"11 3 Tell me a poem OpenAI 4 \n",
"\n",
" output \\\n",
"0 \\n\\nQ: What did the fish say when it hit the w... \n",
"1 \\n\\nRoses are red,\\nViolets are blue,\\nSugar i... \n",
"2 \\n\\nQ: What did the fish say when it hit the w... \n",
"3 \\n\\nRoses are red,\\nViolets are blue,\\nSugar i... \n",
"4 \\n\\nQ: What did the fish say when it hit the w... \n",
"5 \\n\\nRoses are red,\\nViolets are blue,\\nSugar i... \n",
"6 \\n\\nQ: What did the fish say when it hit the w... \n",
"7 \\n\\nRoses are red,\\nViolets are blue,\\nSugar i... \n",
"8 \\n\\nQ: What did the fish say when it hit the w... \n",
"9 \\n\\nRoses are red,\\nViolets are blue,\\nSugar i... \n",
"10 \\n\\nQ: What did the fish say when it hit the w... \n",
"11 \\n\\nRoses are red,\\nViolets are blue,\\nSugar i... \n",
"\n",
" token_usage_total_tokens token_usage_prompt_tokens \\\n",
"0 162 24 \n",
"1 162 24 \n",
"2 162 24 \n",
"3 162 24 \n",
"4 162 24 \n",
"5 162 24 \n",
"6 162 24 \n",
"7 162 24 \n",
"8 162 24 \n",
"9 162 24 \n",
"10 162 24 \n",
"11 162 24 \n",
"\n",
" token_usage_completion_tokens flesch_reading_ease flesch_kincaid_grade \\\n",
"0 138 109.04 1.3 \n",
"1 138 83.66 4.8 \n",
"2 138 109.04 1.3 \n",
"3 138 83.66 4.8 \n",
"4 138 109.04 1.3 \n",
"5 138 83.66 4.8 \n",
"6 138 109.04 1.3 \n",
"7 138 83.66 4.8 \n",
"8 138 109.04 1.3 \n",
"9 138 83.66 4.8 \n",
"10 138 109.04 1.3 \n",
"11 138 83.66 4.8 \n",
"\n",
" ... difficult_words linsear_write_formula gunning_fog \\\n",
"0 ... 0 5.5 5.20 \n",
"1 ... 2 6.5 8.28 \n",
"2 ... 0 5.5 5.20 \n",
"3 ... 2 6.5 8.28 \n",
"4 ... 0 5.5 5.20 \n",
"5 ... 2 6.5 8.28 \n",
"6 ... 0 5.5 5.20 \n",
"7 ... 2 6.5 8.28 \n",
"8 ... 0 5.5 5.20 \n",
"9 ... 2 6.5 8.28 \n",
"10 ... 0 5.5 5.20 \n",
"11 ... 2 6.5 8.28 \n",
"\n",
" text_standard fernandez_huerta szigriszt_pazos gutierrez_polini \\\n",
"0 5th and 6th grade 133.58 131.54 62.30 \n",
"1 6th and 7th grade 115.58 112.37 54.83 \n",
"2 5th and 6th grade 133.58 131.54 62.30 \n",
"3 6th and 7th grade 115.58 112.37 54.83 \n",
"4 5th and 6th grade 133.58 131.54 62.30 \n",
"5 6th and 7th grade 115.58 112.37 54.83 \n",
"6 5th and 6th grade 133.58 131.54 62.30 \n",
"7 6th and 7th grade 115.58 112.37 54.83 \n",
"8 5th and 6th grade 133.58 131.54 62.30 \n",
"9 6th and 7th grade 115.58 112.37 54.83 \n",
"10 5th and 6th grade 133.58 131.54 62.30 \n",
"11 6th and 7th grade 115.58 112.37 54.83 \n",
"\n",
" crawford gulpease_index osman \n",
"0 -0.2 79.8 116.91 \n",
"1 1.4 72.1 100.17 \n",
"2 -0.2 79.8 116.91 \n",
"3 1.4 72.1 100.17 \n",
"4 -0.2 79.8 116.91 \n",
"5 1.4 72.1 100.17 \n",
"6 -0.2 79.8 116.91 \n",
"7 1.4 72.1 100.17 \n",
"8 -0.2 79.8 116.91 \n",
"9 1.4 72.1 100.17 \n",
"10 -0.2 79.8 116.91 \n",
"11 1.4 72.1 100.17 \n",
"\n",
"[12 rows x 24 columns]}\n",
"2023-03-29 14:00:25,948 - clearml.Task - INFO - Completed model upload to https://files.clear.ml/langchain_callback_demo/llm.988bd727b0e94a29a3ac0ee526813545/models/simple_sequential\n"
]
}
],
"source": [
"# SCENARIO 1 - LLM\n",
"llm_result = llm.generate([\"Tell me a joke\", \"Tell me a poem\"] * 3)\n",
"# After every generation run, use flush to make sure all the metrics\n",
"# prompts and other output are properly saved separately\n",
"clearml_callback.flush_tracker(langchain_asset=llm, name=\"simple_sequential\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point you can already go to https://app.clear.ml and take a look at the resulting ClearML Task that was created.\n",
"\n",
"Among others, you should see that this notebook is saved along with any git information. The model JSON that contains the used parameters is saved as an artifact, there are also console logs and under the plots section, you'll find tables that represent the flow of the chain.\n",
"\n",
"Finally, if you enabled visualizations, these are stored as HTML files under debug samples."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Scenario 2: Creating a agent with tools\n",
"\n",
"To show a more advanced workflow, let's create an agent with access to tools. The way ClearML tracks the results is not different though, only the table will look slightly different as there are other types of actions taken when compared to the earlier, simpler example.\n",
"\n",
"You can now also see the use of the `finish=True` keyword, which will fully close the ClearML Task, instead of just resetting the parameters and prompts for a new conversation."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"{'action': 'on_chain_start', 'name': 'AgentExecutor', 'step': 1, 'starts': 1, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 0, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'input': 'Who is the wife of the person who sang summer of 69?'}\n",
"{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 2, 'starts': 2, 'ends': 0, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 0, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\\n\\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [Search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: Who is the wife of the person who sang summer of 69?\\nThought:'}\n",
"{'action': 'on_llm_end', 'token_usage_prompt_tokens': 189, 'token_usage_completion_tokens': 34, 'token_usage_total_tokens': 223, 'model_name': 'text-davinci-003', 'step': 3, 'starts': 2, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 0, 'tool_ends': 0, 'agent_ends': 0, 'text': ' I need to find out who sang summer of 69 and then find out who their wife is.\\nAction: Search\\nAction Input: \"Who sang summer of 69\"', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 91.61, 'flesch_kincaid_grade': 3.8, 'smog_index': 0.0, 'coleman_liau_index': 3.41, 'automated_readability_index': 3.5, 'dale_chall_readability_score': 6.06, 'difficult_words': 2, 'linsear_write_formula': 5.75, 'gunning_fog': 5.4, 'text_standard': '3rd and 4th grade', 'fernandez_huerta': 121.07, 'szigriszt_pazos': 119.5, 'gutierrez_polini': 54.91, 'crawford': 0.9, 'gulpease_index': 72.7, 'osman': 92.16}\n",
"\u001b[32;1m\u001b[1;3m I need to find out who sang summer of 69 and then find out who their wife is.\n",
"Action: Search\n",
"Action Input: \"Who sang summer of 69\"\u001b[0m{'action': 'on_agent_action', 'tool': 'Search', 'tool_input': 'Who sang summer of 69', 'log': ' I need to find out who sang summer of 69 and then find out who their wife is.\\nAction: Search\\nAction Input: \"Who sang summer of 69\"', 'step': 4, 'starts': 3, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 1, 'tool_ends': 0, 'agent_ends': 0}\n",
"{'action': 'on_tool_start', 'input_str': 'Who sang summer of 69', 'name': 'Search', 'description': 'A search engine. Useful for when you need to answer questions about current events. Input should be a search query.', 'step': 5, 'starts': 4, 'ends': 1, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 0, 'agent_ends': 0}\n",
"\n",
"Observation: \u001b[36;1m\u001b[1;3mBryan Adams - Summer Of 69 (Official Music Video).\u001b[0m\n",
"Thought:{'action': 'on_tool_end', 'output': 'Bryan Adams - Summer Of 69 (Official Music Video).', 'step': 6, 'starts': 4, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 1, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0}\n",
"{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 7, 'starts': 5, 'ends': 2, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 1, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\\n\\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [Search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: Who is the wife of the person who sang summer of 69?\\nThought: I need to find out who sang summer of 69 and then find out who their wife is.\\nAction: Search\\nAction Input: \"Who sang summer of 69\"\\nObservation: Bryan Adams - Summer Of 69 (Official Music Video).\\nThought:'}\n",
"{'action': 'on_llm_end', 'token_usage_prompt_tokens': 242, 'token_usage_completion_tokens': 28, 'token_usage_total_tokens': 270, 'model_name': 'text-davinci-003', 'step': 8, 'starts': 5, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 2, 'tool_ends': 1, 'agent_ends': 0, 'text': ' I need to find out who Bryan Adams is married to.\\nAction: Search\\nAction Input: \"Who is Bryan Adams married to\"', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 94.66, 'flesch_kincaid_grade': 2.7, 'smog_index': 0.0, 'coleman_liau_index': 4.73, 'automated_readability_index': 4.0, 'dale_chall_readability_score': 7.16, 'difficult_words': 2, 'linsear_write_formula': 4.25, 'gunning_fog': 4.2, 'text_standard': '4th and 5th grade', 'fernandez_huerta': 124.13, 'szigriszt_pazos': 119.2, 'gutierrez_polini': 52.26, 'crawford': 0.7, 'gulpease_index': 74.7, 'osman': 84.2}\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Bryan Adams is married to.\n",
"Action: Search\n",
"Action Input: \"Who is Bryan Adams married to\"\u001b[0m{'action': 'on_agent_action', 'tool': 'Search', 'tool_input': 'Who is Bryan Adams married to', 'log': ' I need to find out who Bryan Adams is married to.\\nAction: Search\\nAction Input: \"Who is Bryan Adams married to\"', 'step': 9, 'starts': 6, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 3, 'tool_ends': 1, 'agent_ends': 0}\n",
"{'action': 'on_tool_start', 'input_str': 'Who is Bryan Adams married to', 'name': 'Search', 'description': 'A search engine. Useful for when you need to answer questions about current events. Input should be a search query.', 'step': 10, 'starts': 7, 'ends': 3, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 1, 'agent_ends': 0}\n",
"\n",
"Observation: \u001b[36;1m\u001b[1;3mBryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ...\u001b[0m\n",
"Thought:{'action': 'on_tool_end', 'output': 'Bryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ...', 'step': 11, 'starts': 7, 'ends': 4, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 2, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0}\n",
"{'action': 'on_llm_start', 'name': 'OpenAI', 'step': 12, 'starts': 8, 'ends': 4, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 2, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0, 'prompts': 'Answer the following questions as best you can. You have access to the following tools:\\n\\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [Search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: Who is the wife of the person who sang summer of 69?\\nThought: I need to find out who sang summer of 69 and then find out who their wife is.\\nAction: Search\\nAction Input: \"Who sang summer of 69\"\\nObservation: Bryan Adams - Summer Of 69 (Official Music Video).\\nThought: I need to find out who Bryan Adams is married to.\\nAction: Search\\nAction Input: \"Who is Bryan Adams married to\"\\nObservation: Bryan Adams has never married. In the 1990s, he was in a relationship with Danish model Cecilie Thomsen. In 2011, Bryan and Alicia Grimaldi, his ...\\nThought:'}\n",
"{'action': 'on_llm_end', 'token_usage_prompt_tokens': 314, 'token_usage_completion_tokens': 18, 'token_usage_total_tokens': 332, 'model_name': 'text-davinci-003', 'step': 13, 'starts': 8, 'ends': 5, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 0, 'text': ' I now know the final answer.\\nFinal Answer: Bryan Adams has never been married.', 'generation_info_finish_reason': 'stop', 'generation_info_logprobs': None, 'flesch_reading_ease': 81.29, 'flesch_kincaid_grade': 3.7, 'smog_index': 0.0, 'coleman_liau_index': 5.75, 'automated_readability_index': 3.9, 'dale_chall_readability_score': 7.37, 'difficult_words': 1, 'linsear_write_formula': 2.5, 'gunning_fog': 2.8, 'text_standard': '3rd and 4th grade', 'fernandez_huerta': 115.7, 'szigriszt_pazos': 110.84, 'gutierrez_polini': 49.79, 'crawford': 0.7, 'gulpease_index': 85.4, 'osman': 83.14}\n",
"\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Bryan Adams has never been married.\u001b[0m\n",
"{'action': 'on_agent_finish', 'output': 'Bryan Adams has never been married.', 'log': ' I now know the final answer.\\nFinal Answer: Bryan Adams has never been married.', 'step': 14, 'starts': 8, 'ends': 6, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 0, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 1}\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"{'action': 'on_chain_end', 'outputs': 'Bryan Adams has never been married.', 'step': 15, 'starts': 8, 'ends': 7, 'errors': 0, 'text_ctr': 0, 'chain_starts': 1, 'chain_ends': 1, 'llm_starts': 3, 'llm_ends': 3, 'llm_streams': 0, 'tool_starts': 4, 'tool_ends': 2, 'agent_ends': 1}\n",
"{'action_records': action name step starts ends errors text_ctr \\\n",
"0 on_llm_start OpenAI 1 1 0 0 0 \n",
"1 on_llm_start OpenAI 1 1 0 0 0 \n",
"2 on_llm_start OpenAI 1 1 0 0 0 \n",
"3 on_llm_start OpenAI 1 1 0 0 0 \n",
"4 on_llm_start OpenAI 1 1 0 0 0 \n",
".. ... ... ... ... ... ... ... \n",
"66 on_tool_end NaN 11 7 4 0 0 \n",
"67 on_llm_start OpenAI 12 8 4 0 0 \n",
"68 on_llm_end NaN 13 8 5 0 0 \n",
"69 on_agent_finish NaN 14 8 6 0 0 \n",
"70 on_chain_end NaN 15 8 7 0 0 \n",
"\n",
" chain_starts chain_ends llm_starts ... gulpease_index osman input \\\n",
"0 0 0 1 ... NaN NaN NaN \n",
"1 0 0 1 ... NaN NaN NaN \n",
"2 0 0 1 ... NaN NaN NaN \n",
"3 0 0 1 ... NaN NaN NaN \n",
"4 0 0 1 ... NaN NaN NaN \n",
".. ... ... ... ... ... ... ... \n",
"66 1 0 2 ... NaN NaN NaN \n",
"67 1 0 3 ... NaN NaN NaN \n",
"68 1 0 3 ... 85.4 83.14 NaN \n",
"69 1 0 3 ... NaN NaN NaN \n",
"70 1 1 3 ... NaN NaN NaN \n",
"\n",
" tool tool_input log \\\n",
"0 NaN NaN NaN \n",
"1 NaN NaN NaN \n",
"2 NaN NaN NaN \n",
"3 NaN NaN NaN \n",
"4 NaN NaN NaN \n",
".. ... ... ... \n",
"66 NaN NaN NaN \n",
"67 NaN NaN NaN \n",
"68 NaN NaN NaN \n",
"69 NaN NaN I now know the final answer.\\nFinal Answer: B... \n",
"70 NaN NaN NaN \n",
"\n",
" input_str description output \\\n",
"0 NaN NaN NaN \n",
"1 NaN NaN NaN \n",
"2 NaN NaN NaN \n",
"3 NaN NaN NaN \n",
"4 NaN NaN NaN \n",
".. ... ... ... \n",
"66 NaN NaN Bryan Adams has never married. In the 1990s, h... \n",
"67 NaN NaN NaN \n",
"68 NaN NaN NaN \n",
"69 NaN NaN Bryan Adams has never been married. \n",
"70 NaN NaN NaN \n",
"\n",
" outputs \n",
"0 NaN \n",
"1 NaN \n",
"2 NaN \n",
"3 NaN \n",
"4 NaN \n",
".. ... \n",
"66 NaN \n",
"67 NaN \n",
"68 NaN \n",
"69 NaN \n",
"70 Bryan Adams has never been married. \n",
"\n",
"[71 rows x 47 columns], 'session_analysis': prompt_step prompts name \\\n",
"0 2 Answer the following questions as best you can... OpenAI \n",
"1 7 Answer the following questions as best you can... OpenAI \n",
"2 12 Answer the following questions as best you can... OpenAI \n",
"\n",
" output_step output \\\n",
"0 3 I need to find out who sang summer of 69 and ... \n",
"1 8 I need to find out who Bryan Adams is married... \n",
"2 13 I now know the final answer.\\nFinal Answer: B... \n",
"\n",
" token_usage_total_tokens token_usage_prompt_tokens \\\n",
"0 223 189 \n",
"1 270 242 \n",
"2 332 314 \n",
"\n",
" token_usage_completion_tokens flesch_reading_ease flesch_kincaid_grade \\\n",
"0 34 91.61 3.8 \n",
"1 28 94.66 2.7 \n",
"2 18 81.29 3.7 \n",
"\n",
" ... difficult_words linsear_write_formula gunning_fog \\\n",
"0 ... 2 5.75 5.4 \n",
"1 ... 2 4.25 4.2 \n",
"2 ... 1 2.50 2.8 \n",
"\n",
" text_standard fernandez_huerta szigriszt_pazos gutierrez_polini \\\n",
"0 3rd and 4th grade 121.07 119.50 54.91 \n",
"1 4th and 5th grade 124.13 119.20 52.26 \n",
"2 3rd and 4th grade 115.70 110.84 49.79 \n",
"\n",
" crawford gulpease_index osman \n",
"0 0.9 72.7 92.16 \n",
"1 0.7 74.7 84.20 \n",
"2 0.7 85.4 83.14 \n",
"\n",
"[3 rows x 24 columns]}\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Could not update last created model in Task 988bd727b0e94a29a3ac0ee526813545, Task status 'completed' cannot be updated\n"
]
}
],
"source": [
"from langchain.agents import initialize_agent, load_tools\n",
"\n",
"# SCENARIO 2 - Agent with Tools\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callback_manager=manager)\n",
"agent = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=\"zero-shot-react-description\",\n",
" callback_manager=manager,\n",
" verbose=True,\n",
")\n",
"agent.run(\n",
" \"Who is the wife of the person who sang summer of 69?\"\n",
")\n",
"clearml_callback.flush_tracker(langchain_asset=agent, name=\"Agent with Tools\", finish=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tips and Next Steps\n",
"\n",
"- Make sure you always use a unique `name` argument for the `clearml_callback.flush_tracker` function. If not, the model parameters used for a run will override the previous run!\n",
"\n",
"- If you close the ClearML Callback using `clearml_callback.flush_tracker(..., finish=True)` the Callback cannot be used anymore. Make a new one if you want to keep logging.\n",
"\n",
"- Check out the rest of the open source ClearML ecosystem, there is a data version manager, a remote execution agent, automated pipelines and much more!\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "a53ebf4a859167383b364e7e7521d0add3c2dbbdecce4edf676e8c4634ff3fbb"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}


@@ -22,4 +22,4 @@ There exists a Cohere Embeddings wrapper, which you can access with
```python
from langchain.embeddings import CohereEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/cohere.ipynb)
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
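A minimal usage sketch, assuming `COHERE_API_KEY` is set in the environment (otherwise pass `cohere_api_key` to the constructor):
```python
from langchain.embeddings import CohereEmbeddings

embeddings = CohereEmbeddings()  # or CohereEmbeddings(cohere_api_key="...")

# Embed a single query and a small batch of documents.
query_vector = embeddings.embed_query("Hello world")
doc_vectors = embeddings.embed_documents(["first document", "second document"])
```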


@@ -22,4 +22,4 @@ from langchain.vectorstores import DeepLake
```
For a more detailed walkthrough of the Deep Lake wrapper, see [this notebook](../modules/indexes/vectorstores/examples/deeplake.ipynb)
For a more detailed walkthrough of the Deep Lake wrapper, see [this notebook](../modules/indexes/vectorstore_examples/deeplake.ipynb)


@@ -18,7 +18,7 @@ There exists a GoogleSearchAPIWrapper utility which wraps this API. To import this utility:
from langchain.utilities import GoogleSearchAPIWrapper
```
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/google_search.ipynb).
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_search.ipynb).
### Tool
@@ -29,4 +29,4 @@ from langchain.agents import load_tools
tools = load_tools(["google-search"])
```
For more information on this, see [this page](../modules/agents/tools/getting_started.md)
For more information on this, see [this page](../modules/agents/tools.md)
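A minimal sketch of the utility on its own. The environment variable names below are the ones this wrapper conventionally reads; treat them as assumptions if your version differs.
```python
import os

from langchain.utilities import GoogleSearchAPIWrapper

os.environ["GOOGLE_CSE_ID"] = "..."   # your Custom Search Engine id
os.environ["GOOGLE_API_KEY"] = "..."  # your Google API key

search = GoogleSearchAPIWrapper()
print(search.run("LangChain"))  # returns a snippet string of top results
```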


@@ -34,8 +34,7 @@ search = GoogleSerperAPIWrapper()
tools = [
Tool(
name="Intermediate Answer",
func=search.run,
description="useful for when you need to ask with search"
func=search.run
)
]
@@ -58,7 +57,7 @@ So the final answer is: El Palmar, Spain
'El Palmar, Spain'
```
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/google_serper.ipynb).
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/google_serper.ipynb).
### Tool
@@ -69,4 +68,4 @@ from langchain.agents import load_tools
tools = load_tools(["google-serper"])
```
For more information on this, see [this page](../modules/agents/tools/getting_started.md)
For more information on this, see [this page](../modules/agents/tools.md)
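Putting this page's pieces together, a sketch of the "Intermediate Answer" tool driving a self-ask-with-search agent (the question is illustrative):
```python
import os

from langchain.agents import Tool, initialize_agent
from langchain.llms import OpenAI
from langchain.utilities import GoogleSerperAPIWrapper

os.environ["SERPER_API_KEY"] = "..."

llm = OpenAI(temperature=0)
search = GoogleSerperAPIWrapper()
tools = [
    Tool(
        name="Intermediate Answer",
        func=search.run,
        description="useful for when you need to ask with search",
    )
]

# The self-ask-with-search agent expects exactly one tool named "Intermediate Answer".
agent = initialize_agent(tools, llm, agent="self-ask-with-search", verbose=True)
agent.run("What is the hometown of the reigning men's U.S. Open champion?")
```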


@@ -30,7 +30,7 @@ To use the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.llms import HuggingFaceHub
```
For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](../modules/models/llms/integrations/huggingface_hub.ipynb)
For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](../modules/llms/integrations/huggingface_hub.ipynb)
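For example, a minimal sketch (the `repo_id` is illustrative, and `HUGGINGFACEHUB_API_TOKEN` is assumed to be set in the environment):
```python
from langchain.llms import HuggingFaceHub

# repo_id is illustrative; any text-generation repo on the Hub should work.
llm = HuggingFaceHub(
    repo_id="google/flan-t5-xl",
    model_kwargs={"temperature": 0.5, "max_length": 64},
)
print(llm("What is a good name for a company that makes colorful socks?"))
```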
### Embeddings
@@ -47,7 +47,7 @@ To use the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.embeddings import HuggingFaceHubEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/huggingfacehub.ipynb)
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
### Tokenizer
@@ -59,7 +59,7 @@ You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_huggingface_tokenizer(...)
```
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/text_splitters/examples/huggingface_length_function.ipynb)
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
### Datasets


@@ -1,18 +0,0 @@
# Jina
This page covers how to use the Jina ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Jina wrappers.
## Installation and Setup
- Install the Python SDK with `pip install jina`
- Get a Jina AI Cloud auth token from [here](https://cloud.jina.ai/settings/tokens) and set it as an environment variable (`JINA_AUTH_TOKEN`)
## Wrappers
### Embeddings
There exists a Jina Embeddings wrapper, which you can access with
```python
from langchain.embeddings import JinaEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
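A minimal sketch; the constructor arguments shown are assumptions based on this page, so check the linked notebook for the exact names:
```python
from langchain.embeddings import JinaEmbeddings

# Argument names below are assumptions for illustration.
embeddings = JinaEmbeddings(
    jina_auth_token="...", model_name="ViT-B-32::openai"
)
vector = embeddings.embed_query("Hello Jina")
```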


@@ -1,20 +0,0 @@
# Milvus
This page covers how to use the Milvus ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Milvus wrappers.
## Installation and Setup
- Install the Python SDK with `pip install pymilvus`
## Wrappers
### VectorStore
There exists a wrapper around Milvus indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Milvus
```
For a more detailed walkthrough of the Milvus wrapper, see [this notebook](../modules/indexes/vectorstores/examples/milvus.ipynb)
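A minimal sketch of the wrapper (the `connection_args` assume a local Milvus server on its default port):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Milvus

# connection_args assume a local Milvus server on the default port.
db = Milvus.from_texts(
    ["Milvus stores embeddings", "LangChain wraps Milvus"],
    OpenAIEmbeddings(),
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
docs = db.similarity_search("What stores embeddings?")
```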


@@ -21,7 +21,7 @@ If you are using a model hosted on Azure, you should use a different wrapper for that:
```python
from langchain.llms import AzureOpenAI
```
For a more detailed walkthrough of the Azure wrapper, see [this notebook](../modules/models/llms/integrations/azure_openai_example.ipynb)
For a more detailed walkthrough of the Azure wrapper, see [this notebook](../modules/llms/integrations/azure_openai_example.ipynb)
@@ -31,7 +31,7 @@ There exists an OpenAI Embeddings wrapper, which you can access with
```python
from langchain.embeddings import OpenAIEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/openai.ipynb)
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
### Tokenizer
@@ -44,7 +44,7 @@ You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/text_splitters/examples/tiktoken.ipynb)
For a more detailed walkthrough of this, see [this notebook](../modules/indexes/examples/textsplitter.ipynb)
### Moderation
You can also access the OpenAI content moderation endpoint with
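a chain along these lines (a minimal sketch, assuming the `OpenAIModerationChain` exported by `langchain.chains`):
```python
from langchain.chains import OpenAIModerationChain

# With error=True the chain raises instead of returning a warning string
# when input violates OpenAI's content policy (the flag is an assumption).
moderation = OpenAIModerationChain()
print(moderation.run("This is perfectly innocent text."))
```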


@@ -18,4 +18,4 @@ To import this vectorstore:
from langchain.vectorstores import OpenSearchVectorSearch
```
For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](../modules/indexes/vectorstores/examples/opensearch.ipynb)
For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](../modules/indexes/vectorstore_examples/opensearch.ipynb)
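A minimal sketch (the `opensearch_url` assumes a local OpenSearch node with default settings):
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import OpenSearchVectorSearch

# opensearch_url assumes a local, unsecured OpenSearch node.
db = OpenSearchVectorSearch.from_texts(
    ["OpenSearch can store embeddings"],
    OpenAIEmbeddings(),
    opensearch_url="http://localhost:9200",
)
docs = db.similarity_search("What stores embeddings?")
```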


@@ -26,4 +26,4 @@ from langchain.vectorstores.pgvector import PGVector
### Usage
For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](../modules/indexes/vectorstores/examples/pgvector.ipynb)
For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](../modules/indexes/vectorstore_examples/pgvector.ipynb)

View File

@@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Pinecone
```
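A minimal usage sketch; the API key, environment, and index name are placeholders you would replace with your own, and the index is assumed to already exist:
```python
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

# Placeholder credentials; substitute your own key/environment
pinecone.init(api_key="YOUR_API_KEY", environment="us-east1-gcp")
docsearch = Pinecone.from_texts(
    ["foo", "bar"], OpenAIEmbeddings(), index_name="langchain-demo"
)
results = docsearch.similarity_search("foo")
```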
For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/indexes/vectorstores/examples/pinecone.ipynb)
For a more detailed walkthrough of the Pinecone wrapper, see [this notebook](../modules/indexes/vectorstore_examples/pinecone.ipynb)

View File

@@ -25,25 +25,9 @@ from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
```
To get the PromptLayer request ID, use the argument `return_pl_id` when instantiating the LLM:
```python
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(return_pl_id=True)
```
This will add the PromptLayer request ID to the `generation_info` field of the `Generation` returned when using `.generate` or `.agenerate`.
For example:
```python
llm_results = llm.generate(["hello world"])
for res in llm_results.generations:
print("pl request id: ", res[0].generation_info["pl_request_id"])
```
You can use the PromptLayer request ID to add a prompt, score, or other metadata to your request. [Read more about it here](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
This LLM is identical to the [OpenAI LLM](./openai.md), except that
This LLM is identical to the [OpenAI LLM](./openai), except that
- all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantiating to tag your requests on PromptLayer
- you can add `return_pl_id` when instantiating to return a PromptLayer request ID to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](../modules/models/chat/integrations/promptlayer_chatopenai.ipynb) and `PromptLayerOpenAIChat`
PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](../modules/chat/examples/promptlayer_chat_openai.ipynb)

View File

@@ -1,20 +0,0 @@
# Qdrant
This page covers how to use the Qdrant ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Qdrant wrappers.
## Installation and Setup
- Install the Python SDK with `pip install qdrant-client`
## Wrappers
### VectorStore
There exists a wrapper around Qdrant indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.
To import this vectorstore:
```python
from langchain.vectorstores import Qdrant
```
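A minimal usage sketch; the `host`/`port` kwargs are assumptions for a locally running Qdrant server:
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

# Assumes a Qdrant server on localhost; adjust host/port for your deployment
qdrant = Qdrant.from_texts(
    ["foo", "bar"], OpenAIEmbeddings(), host="localhost", port=6333
)
results = qdrant.similarity_search("foo")
```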
For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](../modules/indexes/vectorstores/examples/qdrant.ipynb)

View File

@@ -1,47 +0,0 @@
# Replicate
This page covers how to run models on Replicate within LangChain.
## Installation and Setup
- Create a [Replicate](https://replicate.com) account. Get your API key and set it as an environment variable (`REPLICATE_API_TOKEN`)
- Install the [Replicate python client](https://github.com/replicate/replicate-python) with `pip install replicate`
## Calling a model
Find a model on the [Replicate explore page](https://replicate.com/explore), and then paste in the model name and version in this format: `owner-name/model-name:version`
For example, for this [flan-t5 model](https://replicate.com/daanelson/flan-t5), click on the API tab. The model name/version would be: `daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8`
Only the `model` param is required, but any other model parameters can also be passed in with the format `input={model_param: value, ...}`
For example, if we were running stable diffusion and wanted to change the image dimensions:
```python
Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})
```
*Note that only the first output of a model will be returned.*
From here, we can initialize our model:
```python
llm = Replicate(model="daanelson/flan-t5:04e422a9b85baed86a4f24981d7f9953e20c5fd82f6103b74ebc431588e1cec8")
```
And run it:
```python
prompt = """
Answer the following yes/no question by reasoning step by step.
Can a dog drive a car?
"""
llm(prompt)
```
We can call any Replicate model (not just LLMs) using this syntax. For example, we can call [Stable Diffusion](https://replicate.com/stability-ai/stable-diffusion):
```python
text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf",
                     input={'image_dimensions': '512x512'})
image_output = text2image("A cat riding a motorcycle by Picasso")
```

View File

@@ -15,7 +15,7 @@ custom LLMs, you can use the `SelfHostedPipeline` parent class.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](../modules/models/llms/integrations/self_hosted_examples.ipynb)
For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](../modules/llms/integrations/self_hosted_examples.ipynb)
## Self-hosted Embeddings
There are several ways to use self-hosted embeddings with LangChain via Runhouse.
@@ -26,4 +26,6 @@ the `SelfHostedEmbedding` class.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](../modules/models/text_embedding/examples/self-hosted.ipynb)
For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](../modules/indexes/examples/embeddings.ipynb)
##

View File

@@ -47,24 +47,12 @@ s.run("what is a large language model?")
### Tool
You can also load this wrapper as a Tool (to use with an Agent).
You can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:
```python
from langchain.agents import load_tools
tools = load_tools(["searx-search"],
searx_host="http://localhost:8888",
engines=["github"])
tools = load_tools(["searx-search"], searx_host="http://localhost:8888")
```
Note that we could _optionally_ pass custom engines to use.
If you want to obtain results with metadata as *JSON*, you can use:
```python
tools = load_tools(["searx-search-results-json"],
searx_host="http://localhost:8888",
num_results=5)
```
For more information on tools, see [this page](../modules/agents/tools/getting_started.md)
For more information on tools, see [this page](../modules/agents/tools.md)

View File

@@ -17,7 +17,7 @@ There exists a SerpAPI utility which wraps this API. To import this utility:
from langchain.utilities import SerpAPIWrapper
```
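For example (assuming `SERPAPI_API_KEY` is set in your environment):
```python
from langchain.utilities import SerpAPIWrapper

search = SerpAPIWrapper()
search.run("What is the capital of France?")
```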
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/serpapi.ipynb).
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/serpapi.ipynb).
### Tool
@@ -28,4 +28,4 @@ from langchain.agents import load_tools
tools = load_tools(["serpapi"])
```
For more information on this, see [this page](../modules/agents/tools/getting_started.md)
For more information on this, see [this page](../modules/agents/tools.md)

View File

@@ -13,11 +13,10 @@ This page is broken into two parts: installation and setup, and then references
- Install the Python SDK with `pip install "unstructured[local-inference]"`
- Install the following system dependencies if they are not already available on your system.
Depending on what document types you're parsing, you may not need all of these.
- `libmagic-dev` (filetype detection)
- `poppler-utils` (images and PDFs)
- `tesseract-ocr`(images and PDFs)
- `libreoffice` (MS Office docs)
- `pandoc` (EPUBs)
- `libmagic-dev`
- `poppler-utils`
- `tesseract-ocr`
- `libreoffice`
- If you are parsing PDFs using the `"hi_res"` strategy, run the following to install the `detectron2` model, which
`unstructured` uses for layout detection:
- `pip install "detectron2@git+https://github.com/facebookresearch/detectron2.git@v0.6#egg=detectron2"`

View File

@@ -1,625 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Weights & Biases\n",
"\n",
"This notebook goes over how to track your LangChain experiments into one centralized Weights and Biases dashboard. To learn more about prompt engineering and the callback please refer to this Report which explains both alongside the resultant dashboards you can expect to see.\n",
"\n",
"Run in Colab: https://colab.research.google.com/drive/1DXH4beT4HFaRKy_Vm4PoxhXVDRf7Ym8L?usp=sharing\n",
"\n",
"View Report: https://wandb.ai/a-sh0ts/langchain_callback_demo/reports/Prompt-Engineering-LLMs-with-LangChain-and-W-B--VmlldzozNjk1NTUw#👋-how-to-build-a-callback-in-langchain-for-better-prompt-engineering"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install wandb\n",
"!pip install pandas\n",
"!pip install textstat\n",
"!pip install spacy\n",
"!python -m spacy download en_core_web_sm"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "T1bSmKd6V2If"
},
"outputs": [],
"source": [
"import os\n",
"os.environ[\"WANDB_API_KEY\"] = \"\"\n",
"# os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
"# os.environ[\"SERPAPI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "8WAGnTWpUUnD"
},
"outputs": [],
"source": [
"from datetime import datetime\n",
"from langchain.callbacks import WandbCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```\n",
"Callback Handler that logs to Weights and Biases.\n",
"\n",
"Parameters:\n",
" job_type (str): The type of job.\n",
" project (str): The project to log to.\n",
" entity (str): The entity to log to.\n",
" tags (list): The tags to log.\n",
" group (str): The group to log to.\n",
" name (str): The name of the run.\n",
" notes (str): The notes to log.\n",
" visualize (bool): Whether to visualize the run.\n",
" complexity_metrics (bool): Whether to log complexity metrics.\n",
" stream_logs (bool): Whether to stream callback actions to W&B\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cxBFfZR8d9FC"
},
"source": [
"```\n",
"Default values for WandbCallbackHandler(...)\n",
"\n",
"visualize: bool = False,\n",
"complexity_metrics: bool = False,\n",
"stream_logs: bool = False,\n",
"```\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"NOTE: For beta workflows we have made the default analysis based on textstat and the visualizations based on spacy"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "KAz8weWuUeXF"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[34m\u001b[1mwandb\u001b[0m: Currently logged in as: \u001b[33mharrison-chase\u001b[0m. Use \u001b[1m`wandb login --relogin`\u001b[0m to force relogin\n"
]
},
{
"data": {
"text/html": [
"Tracking run with wandb version 0.14.0"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Run data is saved locally in <code>/Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150408-e47j1914</code>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Syncing run <strong><a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914' target=\"_blank\">llm</a></strong> to <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target=\"_blank\">Weights & Biases</a> (<a href='https://wandb.me/run' target=\"_blank\">docs</a>)<br/>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
" View project at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target=\"_blank\">https://wandb.ai/harrison-chase/langchain_callback_demo</a>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
" View run at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914' target=\"_blank\">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914</a>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m The wandb callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/wandb/wandb/issues with the tag `langchain`.\n"
]
}
],
"source": [
"\"\"\"Main function.\n",
"\n",
"This function is used to try the callback handler.\n",
"Scenarios:\n",
"1. OpenAI LLM\n",
"2. Chain with multiple SubChains on multiple generations\n",
"3. Agent with Tools\n",
"\"\"\"\n",
"session_group = datetime.now().strftime(\"%m.%d.%Y_%H.%M.%S\")\n",
"wandb_callback = WandbCallbackHandler(\n",
" job_type=\"inference\",\n",
" project=\"langchain_callback_demo\",\n",
" group=f\"minimal_{session_group}\",\n",
" name=\"llm\",\n",
" tags=[\"test\"],\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), wandb_callback])\n",
"llm = OpenAI(temperature=0, callback_manager=manager, verbose=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Q-65jwrDeK6w"
},
"source": [
"\n",
"\n",
"```\n",
"# Defaults for WandbCallbackHandler.flush_tracker(...)\n",
"\n",
"reset: bool = True,\n",
"finish: bool = False,\n",
"```\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `flush_tracker` function is used to log LangChain sessions to Weights & Biases. It takes in the LangChain module or agent, and logs at minimum the prompts and generations alongside the serialized form of the LangChain module to the specified Weights & Biases project. By default we reset the session as opposed to concluding the session outright."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "o_VmneyIUyx8"
},
"outputs": [
{
"data": {
"text/html": [
"Waiting for W&B process to finish... <strong style=\"color:green\">(success).</strong>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
" View run <strong style=\"color:#cdcd00\">llm</strong> at: <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914' target=\"_blank\">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914</a><br/>Synced 5 W&B file(s), 2 media file(s), 5 artifact file(s) and 0 other file(s)"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Find logs at: <code>./wandb/run-20230318_150408-e47j1914/logs</code>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "0d7b4307ccdb450ea631497174fca2d1",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"VBox(children=(Label(value='Waiting for wandb.init()...\\r'), FloatProgress(value=0.016745895149999985, max=1.0…"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Tracking run with wandb version 0.14.0"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Run data is saved locally in <code>/Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150534-jyxma7hu</code>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Syncing run <strong><a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu' target=\"_blank\">simple_sequential</a></strong> to <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target=\"_blank\">Weights & Biases</a> (<a href='https://wandb.me/run' target=\"_blank\">docs</a>)<br/>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
" View project at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target=\"_blank\">https://wandb.ai/harrison-chase/langchain_callback_demo</a>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
" View run at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu' target=\"_blank\">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu</a>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# SCENARIO 1 - LLM\n",
"llm_result = llm.generate([\"Tell me a joke\", \"Tell me a poem\"] * 3)\n",
"wandb_callback.flush_tracker(llm, name=\"simple_sequential\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "trxslyb1U28Y"
},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "uauQk10SUzF6"
},
"outputs": [
{
"data": {
"text/html": [
"Waiting for W&B process to finish... <strong style=\"color:green\">(success).</strong>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
" View run <strong style=\"color:#cdcd00\">simple_sequential</strong> at: <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu' target=\"_blank\">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu</a><br/>Synced 4 W&B file(s), 2 media file(s), 6 artifact file(s) and 0 other file(s)"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Find logs at: <code>./wandb/run-20230318_150534-jyxma7hu/logs</code>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "dbdbf28fb8ed40a3a60218d2e6d1a987",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"VBox(children=(Label(value='Waiting for wandb.init()...\\r'), FloatProgress(value=0.016736786816666675, max=1.0…"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Tracking run with wandb version 0.14.0"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Run data is saved locally in <code>/Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150550-wzy59zjq</code>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Syncing run <strong><a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq' target=\"_blank\">agent</a></strong> to <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target=\"_blank\">Weights & Biases</a> (<a href='https://wandb.me/run' target=\"_blank\">docs</a>)<br/>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
" View project at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target=\"_blank\">https://wandb.ai/harrison-chase/langchain_callback_demo</a>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
" View run at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq' target=\"_blank\">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq</a>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# SCENARIO 2 - Chain\n",
"template = \"\"\"You are a playwright. Given the title of play, it is your job to write a synopsis for that title.\n",
"Title: {title}\n",
"Playwright: This is a synopsis for the above play:\"\"\"\n",
"prompt_template = PromptTemplate(input_variables=[\"title\"], template=template)\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)\n",
"\n",
"test_prompts = [\n",
" {\n",
" \"title\": \"documentary about good video games that push the boundary of game design\"\n",
" },\n",
" {\"title\": \"cocaine bear vs heroin wolf\"},\n",
" {\"title\": \"the best in class mlops tooling\"},\n",
"]\n",
"synopsis_chain.apply(test_prompts)\n",
"wandb_callback.flush_tracker(synopsis_chain, name=\"agent\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "_jN73xcPVEpI"
},
"outputs": [],
"source": [
"from langchain.agents import initialize_agent, load_tools"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"id": "Gpq4rk6VT9cu"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.\n",
"Action: Search\n",
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mDiCaprio had a steady girlfriend in Camila Morrone. He had been with the model turned actress for nearly five years, as they were first said to be dating at the end of 2017. And the now 26-year-old Morrone is no stranger to Hollywood.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to calculate her age raised to the 0.43 power.\n",
"Action: Calculator\n",
"Action Input: 26^0.43\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 4.059182145592686\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Leo DiCaprio's girlfriend is Camila Morrone and her current age raised to the 0.43 power is 4.059182145592686.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/html": [
"Waiting for W&B process to finish... <strong style=\"color:green\">(success).</strong>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
" View run <strong style=\"color:#cdcd00\">agent</strong> at: <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq' target=\"_blank\">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq</a><br/>Synced 5 W&B file(s), 2 media file(s), 7 artifact file(s) and 0 other file(s)"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"Find logs at: <code>./wandb/run-20230318_150550-wzy59zjq/logs</code>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# SCENARIO 3 - Agent with Tools\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callback_manager=manager)\n",
"agent = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=\"zero-shot-react-description\",\n",
" callback_manager=manager,\n",
" verbose=True,\n",
")\n",
"agent.run(\n",
" \"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\"\n",
")\n",
"wandb_callback.flush_tracker(agent, reset=False, finish=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 1
}

View File

@@ -30,4 +30,4 @@ To import this vectorstore:
from langchain.vectorstores import Weaviate
```
For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](../modules/indexes/vectorstores/getting_started.ipynb)
For a more detailed walkthrough of the Weaviate wrapper, see [this notebook](../modules/indexes/examples/vectorstores.ipynb)

View File

@@ -20,7 +20,7 @@ There exists a WolframAlphaAPIWrapper utility which wraps this API. To import th
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
```
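For example (assuming your Wolfram Alpha app ID is set as the `WOLFRAM_ALPHA_APPID` environment variable):
```python
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

wolfram = WolframAlphaAPIWrapper()
wolfram.run("What is 2x + 5 = -3x + 7?")
```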
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/agents/tools/examples/wolfram_alpha.ipynb).
For a more detailed walkthrough of this wrapper, see [this notebook](../modules/utils/examples/wolfram_alpha.ipynb).
### Tool
@@ -31,4 +31,4 @@ from langchain.agents import load_tools
tools = load_tools(["wolfram-alpha"])
```
For more information on this, see [this page](../modules/agents/tools/getting_started.md)
For more information on this, see [this page](../modules/agents/tools.md)

View File

@@ -158,14 +158,14 @@ Open Source
---
.. link-button:: https://github.com/jerryjliu/llama_index
.. link-button:: https://github.com/jerryjliu/gpt_index
:type: url
:text: LlamaIndex
:text: GPT Index
:classes: stretched-link btn-lg
+++
LlamaIndex (formerly GPT Index) is a project consisting of a set of data structures that are created using GPT-3 and can be traversed using GPT-3 in order to answer queries.
GPT Index is a project consisting of a set of data structures that are created using GPT-3 and can be traversed using GPT-3 in order to answer queries.
---
@@ -322,14 +322,5 @@ Proprietary
By Zahid Khawaja, this demo utilizes question answering to answer questions about a given website. A followup added this for `YouTube videos <https://twitter.com/chillzaza_/status/1593739682013220865?s=20&t=EhU8jl0KyCPJ7vE9Rnz-cQ>`_, and then another followup added it for `Wikipedia <https://twitter.com/chillzaza_/status/1594847151238037505?s=20&t=EhU8jl0KyCPJ7vE9Rnz-cQ>`_.
---
.. link-button:: https://mynd.so
:type: url
:text: Mynd
:classes: stretched-link btn-lg
+++
A journaling app for self-care that uses AI to uncover insights and patterns over time.

View File

@@ -36,7 +36,7 @@ os.environ["OPENAI_API_KEY"] = "..."
```
## Building a Language Model Application: LLMs
## Building a Language Model Application
Now that we have installed LangChain and set up our environment, we can start building our language model application.
@@ -74,7 +74,7 @@ print(llm(text))
Feetful of Fun
```
For more details on how to use LLMs within LangChain, see the [LLM getting started guide](../modules/models/llms/getting_started.ipynb).
For more details on how to use LLMs within LangChain, see the [LLM getting started guide](../modules/llms/getting_started.ipynb).
`````
@@ -111,7 +111,7 @@ What is a good name for a company that makes colorful socks?
```
[For more details, check out the getting started guide for prompts.](../modules/prompts/chat_prompt_template.ipynb)
[For more details, check out the getting started guide for prompts.](../modules/prompts/getting_started.ipynb)
`````
@@ -160,7 +160,7 @@ This is one of the simpler types of chains, but understanding how it works will
`````
`````{dropdown} Agents: Dynamically Call Chains Based on User Input
`````{dropdown} Agents: Dynamically call chains based on user input
So far the chains we've looked at run in a predetermined order.
@@ -210,31 +210,35 @@ tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
# Now let's test it out!
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
```
```pycon
> Entering new AgentExecutor chain...
I need to find the temperature first, then use the calculator to raise it to the .023 power.
Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "High temperature in SF yesterday"
Observation: San Francisco Temperature Yesterday. Maximum temperature yesterday: 57 °F (at 1:56 pm) Minimum temperature yesterday: 49 °F (at 1:56 am) Average temperature ...
Thought: I now have the temperature, so I can use the calculator to raise it to the .023 power.
Action Input: "Olivia Wilde boyfriend"
Observation: Jason Sudeikis
Thought: I need to find out Jason Sudeikis' age
Action: Search
Action Input: "Jason Sudeikis age"
Observation: 47 years
Thought: I need to calculate 47 raised to the 0.23 power
Action: Calculator
Action Input: 57^.023
Observation: Answer: 1.0974509573251117
Action Input: 47^0.23
Observation: Answer: 2.4242784855673896
Thought: I now know the final answer
Final Answer: The high temperature in SF yesterday in Fahrenheit raised to the .023 power is 1.0974509573251117.
> Finished chain.
Final Answer: Jason Sudeikis, Olivia Wilde's boyfriend, is 47 years old and his age raised to the 0.23 power is 2.4242784855673896.
> Finished AgentExecutor chain.
"Jason Sudeikis, Olivia Wilde's boyfriend, is 47 years old and his age raised to the 0.23 power is 2.4242784855673896."
```
`````
`````{dropdown} Memory: Add State to Chains and Agents
`````{dropdown} Memory: Add state to chains and agents
So far, all the chains and agents we've gone through have been stateless. But often, you may want a chain or agent to have some concept of "memory" so that it may remember information about its previous interactions. The clearest and simplest example of this is when designing a chatbot - you want it to remember previous messages so it can use that context to have a better conversation. This would be a type of "short-term memory". On the more complex side, you could imagine a chain/agent remembering key pieces of information over time - this would be a form of "long-term memory". For more concrete ideas on the latter, see this [awesome paper](https://memprompt.com/).
@@ -283,218 +287,4 @@ AI:
> Finished chain.
" That's great! What would you like to talk about?"
```
`````
## Building a Language Model Application: Chat Models
Similarly, you can use chat models instead of LLMs. Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
Chat model APIs are fairly new, so we are still figuring out the correct abstractions.
`````{dropdown} Get Message Completions from a Chat Model
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, and `ChatMessage` -- `ChatMessage` takes in an arbitrary role parameter. Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`.
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
```
You can get completions by passing in a single message.
```python
chat([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```
You can also pass in multiple messages for OpenAI's gpt-3.5-turbo and gpt-4 models.
```python
messages = [
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
]
chat(messages)
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```
You can go one step further and generate completions for multiple sets of messages using `generate`. This returns an `LLMResult` with an additional `message` parameter:
```python
batch_messages = [
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love programming.")
],
[
SystemMessage(content="You are a helpful assistant that translates English to French."),
HumanMessage(content="Translate this sentence from English to French. I love artificial intelligence.")
],
]
result = chat.generate(batch_messages)
result
# -> LLMResult(generations=[[ChatGeneration(text="J'aime programmer.", generation_info=None, message=AIMessage(content="J'aime programmer.", additional_kwargs={}))], [ChatGeneration(text="J'aime l'intelligence artificielle.", generation_info=None, message=AIMessage(content="J'aime l'intelligence artificielle.", additional_kwargs={}))]], llm_output={'token_usage': {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}})
```
You can recover things like token usage from this LLMResult:
```python
result.llm_output['token_usage']
# -> {'prompt_tokens': 71, 'completion_tokens': 18, 'total_tokens': 89}
```
`````
`````{dropdown} Chat Prompt Templates
Similar to LLMs, you can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplate`s. You can use `ChatPromptTemplate`'s `format_prompt` -- this returns a `PromptValue`, which you can convert to a string or `Message` object, depending on whether you want to use the formatted value as input to an llm or chat model.
For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like:
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
# get a chat completion from the formatted messages
chat(chat_prompt.format_prompt(input_language="English", output_language="French", text="I love programming.").to_messages())
# -> AIMessage(content="J'aime programmer.", additional_kwargs={})
```
`````
`````{dropdown} Chains with Chat Models
The `LLMChain` discussed in the above section can be used with chat models as well:
```python
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
chat = ChatOpenAI(temperature=0)
template="You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
human_template="{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
# -> "J'aime programmer."
```
`````
`````{dropdown} Agents with Chat Models
Agents can also be used with chat models; you can initialize one using `"chat-zero-shot-react-description"` as the agent type.
```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
# First, let's load the language model we're going to use to control the agent.
chat = ChatOpenAI(temperature=0)
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(tools, chat, agent="chat-zero-shot-react-description", verbose=True)
# Now let's test it out!
agent.run("Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?")
```
```pycon
> Entering new AgentExecutor chain...
Thought: I need to use a search engine to find Olivia Wilde's boyfriend and a calculator to raise his age to the 0.23 power.
Action:
{
"action": "Search",
"action_input": "Olivia Wilde boyfriend"
}
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought:I need to use a search engine to find Harry Styles' current age.
Action:
{
"action": "Search",
"action_input": "Harry Styles age"
}
Observation: 29 years
Thought:Now I need to calculate 29 raised to the 0.23 power.
Action:
{
"action": "Calculator",
"action_input": "29^0.23"
}
Observation: Answer: 2.169459462491557
Thought:I now know the final answer.
Final Answer: 2.169459462491557
> Finished chain.
'2.169459462491557'
```
`````
`````{dropdown} Memory: Add State to Chains and Agents
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
```python
from langchain.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate
)
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
prompt = ChatPromptTemplate.from_messages([
SystemMessagePromptTemplate.from_template("The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know."),
MessagesPlaceholder(variable_name="history"),
HumanMessagePromptTemplate.from_template("{input}")
])
llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)
conversation.predict(input="Hi there!")
# -> 'Hello! How can I assist you today?'
conversation.predict(input="I'm doing well! Just having a conversation with an AI.")
# -> "That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?"
conversation.predict(input="Tell me about yourself.")
# -> "Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?"
```
`````
```

View File

@@ -32,7 +32,7 @@ This induces the model to think about what action to take, then take it.
Resources:
- [Paper](https://arxiv.org/pdf/2210.03629.pdf)
- [LangChain Example](modules/agents/agents/examples/react.ipynb)
- [LangChain Example](./modules/agents/implementations/react.ipynb)
## Self-ask
@@ -42,7 +42,7 @@ In this method, the model explicitly asks itself follow-up questions, which are
Resources:
- [Paper](https://ofir.io/self-ask.pdf)
- [LangChain Example](modules/agents/agents/examples/self_ask_with_search.ipynb)
- [LangChain Example](./modules/agents/implementations/self_ask_with_search.ipynb)
## Prompt Chaining

View File

@@ -1,14 +1,28 @@
Welcome to LangChain
==========================
LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:
Large language models (LLMs) are emerging as a transformative technology, enabling
developers to build applications that they previously could not.
But using these LLMs in isolation is often not enough to
create a truly powerful app - the real power comes when you are able to
combine them with other sources of computation or knowledge.
- *Be data-aware*: connect a language model to other sources of data
- *Be agentic*: allow a language model to interact with its environment
This library is aimed at assisting in the development of those types of applications. Common examples of these types of applications include:
The LangChain framework is designed with the above principles in mind.
**❓ Question Answering over specific documents**
This is the Python specific portion of the documentation. For a purely conceptual guide to LangChain, see `here <https://docs.langchain.com/docs/>`_. For the JavaScript documentation, see `here <https://js.langchain.com/docs/>`_.
- `Documentation <./use_cases/question_answering.html>`_
- End-to-end Example: `Question Answering over Notion Database <https://github.com/hwchase17/notion-qa>`_
**💬 Chatbots**
- `Documentation <./use_cases/chatbots.html>`_
- End-to-end Example: `Chat-LangChain <https://github.com/hwchase17/chat-langchain>`_
**🤖 Agents**
- `Documentation <./use_cases/agents.html>`_
- End-to-end Example: `GPT+WolframAlpha <https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain>`_
Getting Started
----------------
@@ -32,18 +46,25 @@ There are several main modules that LangChain provides support for.
For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.
These modules are, in increasing order of complexity:
- `Models <./modules/models.html>`_: The various model types and model integrations LangChain supports.
- `Prompts <./modules/prompts.html>`_: This includes prompt management, prompt optimization, and prompt serialization.
- `Memory <./modules/memory.html>`_: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
- `LLMs <./modules/llms.html>`_: This includes a generic interface for all LLMs, and common utilities for working with LLMs.
- `Indexes <./modules/indexes.html>`_: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.
- `Document Loaders <./modules/document_loaders.html>`_: This includes a standard interface for loading documents, as well as specific integrations to all types of text data sources.
- `Utils <./modules/utils.html>`_: Language models are often more powerful when interacting with other sources of knowledge or computation. This can include Python REPLs, embeddings, search engines, and more. LangChain provides a large collection of common utils to use in your application.
- `Chains <./modules/chains.html>`_: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
- `Indexes <./modules/indexes.html>`_: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.
- `Agents <./modules/agents.html>`_: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.
- `Memory <./modules/memory.html>`_: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.
- `Chat <./modules/chat.html>`_: Chat models are a variation on Language Models that expose a different API - rather than working with raw text, they work with messages. LangChain provides a standard interface for working with them and doing all the same things as above.
.. toctree::
:maxdepth: 1
@@ -51,34 +72,38 @@ These modules are, in increasing order of complexity:
:name: modules
:hidden:
./modules/models.rst
./modules/prompts.rst
./modules/prompts.md
./modules/llms.md
./modules/document_loaders.md
./modules/utils.md
./modules/indexes.md
./modules/memory.md
./modules/chains.md
./modules/agents.md
./modules/memory.md
./modules/chat.md
Use Cases
----------
The above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.
- `Personal Assistants <./use_cases/personal_assistants.html>`_: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.
- `Question Answering <./use_cases/question_answering.html>`_: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.
- `Agents <./use_cases/agents.html>`_: Agents are systems that use a language model to interact with other tools. These can be used to do more grounded question/answering, interact with APIs, or even take actions.
- `Chatbots <./use_cases/chatbots.html>`_: Since language models are good at producing text, that makes them ideal for creating chatbots.
- `Querying Tabular Data <./use_cases/tabular.html>`_: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.
- `Data Augmented Generation <./use_cases/combine_docs.html>`_: Data Augmented Generation involves specific types of chains that first interact with an external datasource to fetch data to use in the generation step. Examples of this include summarization of long pieces of text and question/answering over specific data sources.
- `Interacting with APIs <./use_cases/apis.html>`_: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.
- `Extraction <./use_cases/extraction.html>`_: Extract structured information from text.
- `Question Answering <./use_cases/question_answering.html>`_: Answering questions over specific documents, only utilizing the information in those documents to construct an answer. A type of Data Augmented Generation.
- `Summarization <./use_cases/summarization.html>`_: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.
- `Evaluation <./use_cases/evaluation.html>`_: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
- `Generate similar examples <./use_cases/generate_examples.html>`_: Generating similar examples to a given input. This is a common use case for many applications, and LangChain provides some prompts/chains for assisting in this.
- `Compare models <./use_cases/model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
.. toctree::
:maxdepth: 1
@@ -86,14 +111,14 @@ The above modules can be used in a variety of ways. LangChain also provides guid
:name: use_cases
:hidden:
./use_cases/personal_assistants.md
./use_cases/question_answering.md
./use_cases/agents.md
./use_cases/chatbots.md
./use_cases/tabular.rst
./use_cases/apis.md
./use_cases/generate_examples.ipynb
./use_cases/combine_docs.md
./use_cases/question_answering.md
./use_cases/summarization.md
./use_cases/extraction.md
./use_cases/evaluation.rst
./use_cases/model_laboratory.ipynb
Reference Docs
@@ -144,12 +169,10 @@ Additional collection of resources we think may be useful as you develop your ap
- `Deployments <./deployments.html>`_: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.
- `Tracing <./tracing.html>`_: A guide on using tracing in LangChain to visualize the execution of chains and agents.
- `Model Laboratory <./model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
- `Discord <https://discord.gg/6adMQxSpJS>`_: Join us on our Discord to discuss all things LangChain!
- `Tracing <./tracing.html>`_: A guide on using tracing in LangChain to visualize the execution of chains and agents.
- `Production Support <https://forms.gle/57d8AmXBYp8PP8tZA>`_: As you move your LangChains into production, we'd love to offer more comprehensive support. Please fill out this form and we'll set up a dedicated support Slack channel.
@@ -164,6 +187,5 @@ Additional collection of resources we think may be useful as you develop your ap
./gallery.rst
./deployments.md
./tracing.md
./use_cases/model_laboratory.ipynb
Discord <https://discord.gg/6adMQxSpJS>
Production Support <https://forms.gle/57d8AmXBYp8PP8tZA>

View File

@@ -1,52 +1,30 @@
Agents
==========================
.. note::
`Conceptual Guide <https://docs.langchain.com/docs/components/agents>`_
Some applications will require not just a predetermined chain of calls to LLMs/other tools,
but potentially an unknown chain that depends on the user's input.
In these types of chains, there is an “agent” which has access to a suite of tools.
Depending on the user input, the agent can then decide which, if any, of these tools to call.
In this section of the documentation, we first start with a Getting Started notebook that goes over how to use everything related to agents in an end-to-end manner.
The following sections of documentation are provided:
- `Getting Started <./agents/getting_started.html>`_: A notebook to help you get started working with agents as quickly as possible.
- `Key Concepts <./agents/key_concepts.html>`_: A conceptual guide going over the various concepts related to agents.
- `How-To Guides <./agents/how_to_guides.html>`_: A collection of how-to guides. These highlight how to integrate various types of tools, how to work with different types of agents, and how to customize agents.
- `Reference <../reference/modules/agents.html>`_: API reference documentation for all Agent classes.
.. toctree::
:maxdepth: 1
:caption: Agents
:name: Agents
:hidden:
./agents/getting_started.ipynb
We then split the documentation into the following sections:
**Tools**
An overview of the various tools LangChain supports.
**Agents**
An overview of the different agent types.
**Toolkits**
An overview of toolkits, and examples of the different ones LangChain supports.
**Agent Executor**
An overview of the Agent Executor class and examples of how to use it.
Go Deeper
---------
.. toctree::
:maxdepth: 1
./agents/tools.rst
./agents/agents.rst
./agents/toolkits.rst
./agents/agent_executors.rst
./agents/key_concepts.md
./agents/how_to_guides.rst
Reference<../reference/modules/agents.rst>

View File

@@ -1,17 +0,0 @@
Agent Executors
===============
.. note::
`Conceptual Guide <https://docs.langchain.com/docs/components/agents/agent-executor>`_
Agent executors take an agent and tools and use the agent to decide which tools to call and in what order.
In this part of the documentation we cover other functionality related to agent executors.
.. toctree::
:maxdepth: 1
:glob:
./agent_executors/examples/*

View File

@@ -5,7 +5,7 @@
"id": "82a4c2cc-20ea-4b20-a565-63e905dee8ff",
"metadata": {},
"source": [
"# Python Agent\n",
"## Python Agent\n",
"\n",
"This notebook showcases an agent designed to write and execute python code to answer a question."
]

View File

@@ -36,7 +36,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "345bb078-4ec1-4e3a-827b-cd238c49054d",
"metadata": {
"tags": []
@@ -53,7 +53,7 @@
],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../../state_of_the_union.txt')\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
@@ -247,7 +247,7 @@
" vectorstores=[vectorstore_info, ruff_vectorstore_info],\n",
" llm=llm\n",
")\n",
"agent_executor = create_vectorstore_router_agent(\n",
"agent_executor = create_vectorstore_agent(\n",
" llm=llm,\n",
" toolkit=router_toolkit,\n",
" verbose=True\n",
@@ -409,7 +409,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.9"
}
},
"nbformat": 4,

View File

@@ -1,9 +1,12 @@
# Agent Types
# Agents
Agents use an LLM to determine which actions to take and in what order.
An action can either be using a tool and observing its output, or returning a response to the user.
For a list of easily loadable tools, see [here](tools.md).
Here are the agents available in LangChain.
For a tutorial on how to load agents, see [here](getting_started.ipynb).
## `zero-shot-react-description`
This agent uses the ReAct framework to determine which tool to use

View File

@@ -1,35 +0,0 @@
Agents
=============
.. note::
`Conceptual Guide <https://docs.langchain.com/docs/components/agents/agent>`_
In this part of the documentation we cover the different types of agents, disregarding which specific tools they are used with.
For a high level overview of the different types of agents, see the below documentation.
.. toctree::
:maxdepth: 1
:glob:
./agents/agent_types.md
For documentation on how to create a custom agent, see the below.
.. toctree::
:maxdepth: 1
:glob:
./agents/custom_agent.ipynb
We also have documentation for an in-depth dive into each agent type.
.. toctree::
:maxdepth: 1
:glob:
./agents/examples/*

View File

@@ -1,123 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "82140df0",
"metadata": {},
"source": [
"# ReAct\n",
"\n",
"This notebook showcases using an agent to implement the ReAct logic."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "4e272b47",
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI, Wikipedia\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents.react.base import DocstoreExplorer\n",
"docstore=DocstoreExplorer(Wikipedia())\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=docstore.search,\n",
" description=\"useful for when you need to ask with search\"\n",
" ),\n",
" Tool(\n",
" name=\"Lookup\",\n",
" func=docstore.lookup,\n",
" description=\"useful for when you need to ask with lookup\"\n",
" )\n",
"]\n",
"\n",
"llm = OpenAI(temperature=0, model_name=\"text-davinci-002\")\n",
"react = initialize_agent(tools, llm, agent=\"react-docstore\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8078c8f1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Thought: I need to search David Chanoff and find the U.S. Navy admiral he collaborated with. Then I need to find which President the admiral served under.\n",
"\n",
"Action: Search[David Chanoff]\n",
"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mDavid Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m The U.S. Navy admiral David Chanoff collaborated with is William J. Crowe. I need to find which President he served under.\n",
"\n",
"Action: Search[William J. Crowe]\n",
"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mWilliam James Crowe Jr. (January 2, 1925 October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m William J. Crowe served as the ambassador to the United Kingdom under President Bill Clinton, so the answer is Bill Clinton.\n",
"\n",
"Action: Finish[Bill Clinton]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Bill Clinton'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question = \"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\"\n",
"react.run(question)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "09604a7f",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -5,7 +5,7 @@
"id": "68b24990",
"metadata": {},
"source": [
"# How to combine agents and vectorstores\n",
"# Agents and Vectorstores\n",
"\n",
"This notebook covers how to combine agents and vectorstores. The use case for this is that you've ingested your data into a vectorstore and want to interact with it in an agentic manner.\n",
"\n",
@@ -22,7 +22,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 20,
"id": "2e87c10a",
"metadata": {},
"outputs": [],
@@ -30,30 +30,13 @@
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains import RetrievalQA\n",
"from langchain import OpenAI, VectorDBQA\n",
"llm = OpenAI(temperature=0)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "0b7b772b",
"metadata": {},
"outputs": [],
"source": [
"from pathlib import Path\n",
"relevant_parts = []\n",
"for p in Path(\".\").absolute().parts:\n",
" relevant_parts.append(p)\n",
" if relevant_parts[-3:] == [\"langchain\", \"docs\", \"modules\"]:\n",
" break\n",
"doc_path = str(Path(*relevant_parts) / \"state_of_the_union.txt\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 37,
"id": "f2675861",
"metadata": {},
"outputs": [
@@ -68,7 +51,7 @@
],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader(doc_path)\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
@@ -79,17 +62,17 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 38,
"id": "bc5403d4",
"metadata": {},
"outputs": [],
"source": [
"state_of_union = RetrievalQA.from_chain_type(llm=llm, chain_type=\"stuff\", retriever=docsearch.as_retriever())"
"state_of_union = VectorDBQA.from_chain_type(llm=llm, chain_type=\"stuff\", vectorstore=docsearch)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 39,
"id": "1431cded",
"metadata": {},
"outputs": [],
@@ -99,7 +82,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 40,
"id": "915d3ff3",
"metadata": {},
"outputs": [],
@@ -109,7 +92,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 41,
"id": "96a2edf8",
"metadata": {},
"outputs": [
@@ -126,7 +109,7 @@
"docs = loader.load()\n",
"ruff_texts = text_splitter.split_documents(docs)\n",
"ruff_db = Chroma.from_documents(ruff_texts, embeddings, collection_name=\"ruff\")\n",
"ruff = RetrievalQA.from_chain_type(llm=llm, chain_type=\"stuff\", retriever=ruff_db.as_retriever())"
"ruff = VectorDBQA.from_chain_type(llm=llm, chain_type=\"stuff\", vectorstore=ruff_db)"
]
},
{
@@ -281,9 +264,9 @@
"id": "9161ba91",
"metadata": {},
"source": [
"You can also set `return_direct=True` if you intend to use the agent as a router and just want to directly return the result of the RetrievalQAChain.\n",
"You can also set `return_direct=True` if you intend to use the agent as a router and just want to directly return the result of the VectorDBQaChain.\n",
"\n",
"Notice that in the above examples the agent did some extra work after querying the RetrievalQAChain. You can avoid that and just return the result directly."
"Notice that in the above examples the agent did some extra work after querying the VectorDBQAChain. You can avoid that and just return the result directly."
]
},
{

View File

@@ -5,7 +5,7 @@
"id": "6fb92deb-d89e-439b-855d-c7f2607d794b",
"metadata": {},
"source": [
"# How to use the async API for Agents\n",
"# Async API for Agent\n",
"\n",
"LangChain provides async support for Agents by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
"\n",
@@ -403,7 +403,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.9"
}
},
"nbformat": 4,

View File

@@ -13,7 +13,7 @@
" \n",
" - Tools: The tools the agent has available to use.\n",
" - LLMChain: The LLMChain that produces the text that is parsed in a certain way to determine which action to take.\n",
" - The agent class itself: this parses the output of the LLMChain to determine which action to take.\n",
" - The agent class itself: this parses the output of the LLMChain to determin which action to take.\n",
" \n",
" \n",
"In this notebook we walk through two types of custom agents. The first type shows how to create a custom LLMChain, but still use an existing agent class to parse the output. The second shows how to create a custom agent class."

View File

@@ -5,7 +5,7 @@
"id": "5436020b",
"metadata": {},
"source": [
"# How to access intermediate steps\n",
"# Intermediate Steps\n",
"\n",
"In order to get more visibility into what an agent is doing, we can also return intermediate steps. This comes in the form of an extra key in the return value, which is a list of (action, observation) tuples."
]
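A minimal sketch of what this looks like, using the `return_intermediate_steps` flag this notebook demonstrates:

from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", return_intermediate_steps=True)

# Calling the agent with a dict (rather than .run) exposes the extra key.
response = agent({"input": "Who is the current US president, and what is their age raised to the 0.5 power?"})
print(response["intermediate_steps"])  # list of (action, observation) tuples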

View File

@@ -0,0 +1,130 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "991b1cc1",
"metadata": {},
"source": [
"# Loading from LangChainHub\n",
"\n",
"This notebook covers how to load agents from [LangChainHub](https://github.com/hwchase17/langchain-hub)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "bd4450a2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"No `_type` key found, defaulting to `prompt`.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3m Yes.\n",
"Follow up: Who is the reigning men's U.S. Open champion?\u001B[0m\n",
"Intermediate answer: \u001B[36;1m\u001B[1;3m2016 · SUI · Stan Wawrinka ; 2017 · ESP · Rafael Nadal ; 2018 · SRB · Novak Djokovic ; 2019 · ESP · Rafael Nadal.\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mSo the reigning men's U.S. Open champion is Rafael Nadal.\n",
"Follow up: What is Rafael Nadal's hometown?\u001B[0m\n",
"Intermediate answer: \u001B[36;1m\u001B[1;3mIn 2016, he once again showed his deep ties to Mallorca and opened the Rafa Nadal Academy in his hometown of Manacor.\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mSo the final answer is: Manacor, Mallorca, Spain.\u001B[0m\n",
"\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
"data": {
"text/plain": [
"'Manacor, Mallorca, Spain.'"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain import OpenAI, SerpAPIWrapper\n",
"from langchain.agents import initialize_agent, Tool\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name=\"Intermediate Answer\",\n",
" func=search.run\n",
" )\n",
"]\n",
"\n",
"self_ask_with_search = initialize_agent(tools, llm, agent_path=\"lc://agents/self-ask-with-search/agent.json\", verbose=True)\n",
"self_ask_with_search.run(\"What is the hometown of the reigning men's U.S. Open champion?\")"
]
},
{
"cell_type": "markdown",
"id": "3aede965",
"metadata": {},
"source": [
"# Pinning Dependencies\n",
"\n",
"Specific versions of LangChainHub agents can be pinned with the `lc@<ref>://` syntax."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e679f7b6",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"No `_type` key found, defaulting to `prompt`.\n"
]
}
],
"source": [
"self_ask_with_search = initialize_agent(tools, llm, agent_path=\"lc@2826ef9e8acdf88465e1e5fc8a7bf59e0f9d0a85://agents/self-ask-with-search/agent.json\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9d3d6697",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -5,7 +5,7 @@
"id": "75c041b7",
"metadata": {},
"source": [
"# How to cap the max number of iterations\n",
"# Max Iterations\n",
"\n",
"This notebook walks through how to cap an agent at taking a certain number of steps. This can be useful to ensure that they do not go haywire and take too many steps."
]
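A minimal sketch, assuming the `max_iterations` keyword that this notebook demonstrates:

from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# The agent stops after two action steps instead of looping indefinitely.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True, max_iterations=2)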

View File

@@ -0,0 +1,154 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "bfe18e28",
"metadata": {},
"source": [
"# Serialization\n",
"\n",
"This notebook goes over how to serialize agents. For this notebook, it is important to understand the distinction we draw between `agents` and `tools`. An agent is the LLM powered decision maker that decides which actions to take and in which order. Tools are various instruments (functions) an agent has access to, through which an agent can interact with the outside world. When people generally use agents, they primarily talk about using an agent WITH tools. However, when we talk about serialization of agents, we are talking about the agent by itself. We plan to add support for serializing an agent WITH tools sometime in the future.\n",
"\n",
"Let's start by creating an agent with tools as we normally do:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "eb729f16",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import load_tools\n",
"from langchain.agents import initialize_agent\n",
"from langchain.llms import OpenAI\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)"
]
},
{
"cell_type": "markdown",
"id": "0578f566",
"metadata": {},
"source": [
"Let's now serialize the agent. To be explicit that we are serializing ONLY the agent, we will call the `save_agent` method."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dc544de6",
"metadata": {},
"outputs": [],
"source": [
"agent.save_agent('agent.json')"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62dd45bf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\r\n",
" \"llm_chain\": {\r\n",
" \"memory\": null,\r\n",
" \"verbose\": false,\r\n",
" \"prompt\": {\r\n",
" \"input_variables\": [\r\n",
" \"input\",\r\n",
" \"agent_scratchpad\"\r\n",
" ],\r\n",
" \"output_parser\": null,\r\n",
" \"template\": \"Answer the following questions as best you can. You have access to the following tools:\\n\\nSearch: A search engine. Useful for when you need to answer questions about current events. Input should be a search query.\\nCalculator: Useful for when you need to answer questions about math.\\n\\nUse the following format:\\n\\nQuestion: the input question you must answer\\nThought: you should always think about what to do\\nAction: the action to take, should be one of [Search, Calculator]\\nAction Input: the input to the action\\nObservation: the result of the action\\n... (this Thought/Action/Action Input/Observation can repeat N times)\\nThought: I now know the final answer\\nFinal Answer: the final answer to the original input question\\n\\nBegin!\\n\\nQuestion: {input}\\nThought:{agent_scratchpad}\",\r\n",
" \"template_format\": \"f-string\",\r\n",
" \"validate_template\": true,\r\n",
" \"_type\": \"prompt\"\r\n",
" },\r\n",
" \"llm\": {\r\n",
" \"model_name\": \"text-davinci-003\",\r\n",
" \"temperature\": 0.0,\r\n",
" \"max_tokens\": 256,\r\n",
" \"top_p\": 1,\r\n",
" \"frequency_penalty\": 0,\r\n",
" \"presence_penalty\": 0,\r\n",
" \"n\": 1,\r\n",
" \"best_of\": 1,\r\n",
" \"request_timeout\": null,\r\n",
" \"logit_bias\": {},\r\n",
" \"_type\": \"openai\"\r\n",
" },\r\n",
" \"output_key\": \"text\",\r\n",
" \"_type\": \"llm_chain\"\r\n",
" },\r\n",
" \"allowed_tools\": [\r\n",
" \"Search\",\r\n",
" \"Calculator\"\r\n",
" ],\r\n",
" \"return_values\": [\r\n",
" \"output\"\r\n",
" ],\r\n",
" \"_type\": \"zero-shot-react-description\"\r\n",
"}"
]
}
],
"source": [
"!cat agent.json"
]
},
{
"cell_type": "markdown",
"id": "0eb72510",
"metadata": {},
"source": [
"We can now load the agent back in"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "eb660b76",
"metadata": {},
"outputs": [],
"source": [
"agent = initialize_agent(tools, llm, agent_path=\"agent.json\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aa624ea5",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,11 +1,12 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "fa6802ac",
"metadata": {},
"source": [
"# How to add SharedMemory to an Agent and its Tools\n",
"# Adding SharedMemory to an Agent and its Tools\n",
"\n",
"This notebook goes over adding memory to **both** of an Agent and its tools. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:\n",
"\n",
@@ -259,6 +260,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "4ebd8326",
"metadata": {},
@@ -290,6 +292,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "cc3d0aa4",
"metadata": {},
@@ -490,6 +493,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "d07415da",
"metadata": {},
@@ -540,7 +544,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.9"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,87 @@
"""Run NatBot."""
import time
from langchain.chains.natbot.base import NatBotChain
from langchain.chains.natbot.crawler import Crawler
def run_cmd(cmd: str, _crawler: Crawler) -> None:
"""Run command."""
cmd = cmd.split("\n")[0]
if cmd.startswith("SCROLL UP"):
_crawler.scroll("up")
elif cmd.startswith("SCROLL DOWN"):
_crawler.scroll("down")
elif cmd.startswith("CLICK"):
commasplit = cmd.split(",")
id = commasplit[0].split(" ")[1]
_crawler.click(id)
elif cmd.startswith("TYPE"):
spacesplit = cmd.split(" ")
id = spacesplit[1]
text_pieces = spacesplit[2:]
text = " ".join(text_pieces)
# Strip leading and trailing double quotes
text = text[1:-1]
if cmd.startswith("TYPESUBMIT"):
text += "\n"
_crawler.type(id, text)
time.sleep(2)
if __name__ == "__main__":
objective = "Make a reservation for 2 at 7pm at bistro vida in menlo park"
print("\nWelcome to natbot! What is your objective?")
i = input()
if len(i) > 0:
objective = i
quiet = False
nat_bot_chain = NatBotChain.from_default(objective)
_crawler = Crawler()
_crawler.go_to_page("google.com")
try:
while True:
browser_content = "\n".join(_crawler.crawl())
llm_command = nat_bot_chain.execute(_crawler.page.url, browser_content)
if not quiet:
print("URL: " + _crawler.page.url)
print("Objective: " + objective)
print("----------------\n" + browser_content + "\n----------------\n")
if len(llm_command) > 0:
print("Suggested command: " + llm_command)
command = input()
if command == "r" or command == "":
run_cmd(llm_command, _crawler)
elif command == "g":
url = input("URL:")
_crawler.go_to_page(url)
elif command == "u":
_crawler.scroll("up")
time.sleep(1)
elif command == "d":
_crawler.scroll("down")
time.sleep(1)
elif command == "c":
id = input("id:")
_crawler.click(id)
time.sleep(1)
elif command == "t":
id = input("id:")
text = input("text:")
_crawler.type(id, text)
time.sleep(1)
elif command == "o":
objective = input("Objective:")
else:
print(
"(g) to visit url\n(u) scroll up\n(d) scroll down\n(c) to click"
"\n(t) to type\n(h) to view commands again"
"\n(r/enter) to run suggested command\n(o) change objective"
)
except KeyboardInterrupt:
print("\n[!] Ctrl+C detected, exiting gracefully.")
exit(0)
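For reference, the chain at the heart of this script can also be driven without the interactive loop. A minimal sketch using only calls that appear above; the page content string is a hand-written stand-in for what `Crawler.crawl()` would return:

from langchain.chains.natbot.base import NatBotChain

chain = NatBotChain.from_default("Order a pizza")
fake_browser_content = '<link id=1>Pizza Palace</link> <button id=2>Order now</button>'
# Returns a command string such as "CLICK 2", which run_cmd() above dispatches on.
command = chain.execute("https://example.com", fake_browser_content)
print(command)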

View File

@@ -0,0 +1,108 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "82140df0",
"metadata": {},
"source": [
"# ReAct\n",
"\n",
"This notebook showcases using an agent to implement the ReAct logic."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4e272b47",
"metadata": {},
"outputs": [],
"source": [
"from langchain import OpenAI, Wikipedia\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents.react.base import DocstoreExplorer\n",
"docstore=DocstoreExplorer(Wikipedia())\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=docstore.search\n",
" ),\n",
" Tool(\n",
" name=\"Lookup\",\n",
" func=docstore.lookup\n",
" )\n",
"]\n",
"\n",
"llm = OpenAI(temperature=0, model_name=\"text-davinci-002\")\n",
"react = initialize_agent(tools, llm, agent=\"react-docstore\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8078c8f1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Thought 1: I need to search David Chanoff and find the U.S. Navy admiral he collaborated\n",
"with.\n",
"Action 1: Search[David Chanoff]\u001b[0m\n",
"Observation 1: \u001b[36;1m\u001b[1;3mDavid Chanoff is a noted author of non-fiction work. His work has typically involved collaborations with the principal protagonist of the work concerned. His collaborators have included; Augustus A. White, Joycelyn Elders, Đoàn Văn Toại, William J. Crowe, Ariel Sharon, Kenneth Good and Felix Zandman. He has also written about a wide range of subjects including literary history, education and foreign for The Washington Post, The New Republic and The New York Times Magazine. He has published more than twelve books.\u001b[0m\n",
"Thought 2:\u001b[32;1m\u001b[1;3m The U.S. Navy admiral David Chanoff collaborated with is William J. Crowe.\n",
"Action 2: Search[William J. Crowe]\u001b[0m\n",
"Observation 2: \u001b[36;1m\u001b[1;3mWilliam James Crowe Jr. (January 2, 1925 October 18, 2007) was a United States Navy admiral and diplomat who served as the 11th chairman of the Joint Chiefs of Staff under Presidents Ronald Reagan and George H. W. Bush, and as the ambassador to the United Kingdom and Chair of the Intelligence Oversight Board under President Bill Clinton.\u001b[0m\n",
"Thought 3:\u001b[32;1m\u001b[1;3m The President William J. Crowe served as the ambassador to the United Kingdom under is Bill Clinton.\n",
"Action 3: Finish[Bill Clinton]\u001b[0m\n",
"\u001b[1m> Finished AgentExecutor chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'Bill Clinton'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"question = \"Author David Chanoff has collaborated with a U.S. Navy admiral who served as the ambassador to the United Kingdom under which President?\"\n",
"react.run(question)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -52,8 +52,7 @@
"tools = [\n",
" Tool(\n",
" name=\"Intermediate Answer\",\n",
" func=search.run,\n",
" description=\"useful for when you need to ask with search\"\n",
" func=search.run\n",
" )\n",
"]\n",
"\n",

View File

@@ -0,0 +1,15 @@
# Key Concepts
## Agents
Agents use an LLM to determine which actions to take and in what order.
For more detailed information on agents, and different types of agents in LangChain, see [this documentation](agents.md).
## Tools
Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents.
For more detailed information on tools, and different types of tools in LangChain, see [this documentation](tools.md).
## Toolkits
Toolkits are groups of tools that are best used together.
They allow you to logically group and initialize a set of tools that share a particular resource (such as a database connection or a JSON object).
They can be used to construct an agent for a specific use-case.
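To make the toolkit idea concrete, here is a minimal sketch reusing the Zapier toolkit that appears later in this diff; every tool in the kit shares one `ZapierNLAWrapper` connection:

from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.llms import OpenAI
from langchain.utilities.zapier import ZapierNLAWrapper

llm = OpenAI(temperature=0)
# All tools produced by the toolkit are backed by the same NLA wrapper.
toolkit = ZapierToolkit.from_zapier_nla_wrapper(ZapierNLAWrapper())
agent = initialize_agent(toolkit.get_tools(), llm, agent="zero-shot-react-description", verbose=True)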

View File

@@ -1,18 +0,0 @@
Toolkits
==============
.. note::
`Conceptual Guide <https://docs.langchain.com/docs/components/agents/toolkit>`_
This section of documentation covers agents with toolkits - e.g. an agent applied to a particular use case.
See below for a full list of agent toolkits.
.. toctree::
:maxdepth: 1
:glob:
./toolkits/examples/*

View File

@@ -1,4 +1,4 @@
# Getting Started
# Tools
Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents.
@@ -118,7 +118,7 @@ Below is a list of all supported tools and relevant information:
- Notes: Uses the Google Custom Search API
- Requires LLM: No
- Extra Parameters: `google_api_key`, `google_cse_id`
- For more information on this, see [this page](../../../ecosystem/google_search.md)
- For more information on this, see [this page](../../ecosystem/google_search.md)
**searx-search**
@@ -135,7 +135,7 @@ Below is a list of all supported tools and relevant information:
- Notes: Calls the [serper.dev](https://serper.dev) Google Search API and then parses results.
- Requires LLM: No
- Extra Parameters: `serper_api_key`
- For more information on this, see [this page](../../../ecosystem/google_serper.md)
- For more information on this, see [this page](../../ecosystem/google_serper.md)
**wikipedia**
@@ -145,18 +145,3 @@ Below is a list of all supported tools and relevant information:
- Requires LLM: No
- Extra Parameters: `top_k_results`
**podcast-api**
- Tool Name: Podcast API
- Tool Description: Use the Listen Notes Podcast API to search all podcasts or episodes. The input should be a question in natural language that this API can answer.
- Notes: A natural language connection to the Listen Notes Podcast API (`https://www.PodcastAPI.com`), specifically the `/search/` endpoint.
- Requires LLM: Yes
- Extra Parameters: `listen_api_key` (your api key to access this endpoint)
**openweathermap-api**
- Tool Name: OpenWeatherMap
- Tool Description: A wrapper around OpenWeatherMap API. Useful for fetching current weather information for a specified location. Input should be a location string (e.g. 'London,GB').
- Notes: A connection to the OpenWeatherMap API (https://api.openweathermap.org), specifically the `/data/2.5/weather` endpoint.
- Requires LLM: No
- Extra Parameters: `openweathermap_api_key` (your API key to access this endpoint)
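As the table implies, the extra parameters can be passed straight through `load_tools` as keyword arguments. A rough sketch for the OpenWeatherMap entry above (passing the key as a keyword is an assumption here; setting the OPENWEATHERMAP_API_KEY environment variable, as in the tool's own notebook, also works):

from langchain.agents import load_tools

# Tool name and extra parameter as given in the entry above.
tools = load_tools(["openweathermap-api"], openweathermap_api_key="your-api-key")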

View File

@@ -1,38 +0,0 @@
Tools
=============
.. note::
`Conceptual Guide <https://docs.langchain.com/docs/components/agents/tool>`_
Tools are ways for an agent to interact with the outside world.
For an overview of what tools are, how to use them, and a full list of examples, please see the getting started documentation.
.. toctree::
:maxdepth: 1
:glob:
./tools/getting_started.md
Next, we have some examples of customizing and generically working with tools
.. toctree::
:maxdepth: 1
:glob:
./tools/custom_tools.ipynb
./tools/multi_input_tool.ipynb
In this documentation we cover generic tooling functionality (e.g. how to create your own)
as well as examples of tools and how to use them.
.. toctree::
:maxdepth: 1
:glob:
./tools/examples/*

View File

@@ -1,164 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Apify\n",
"\n",
"This notebook shows how to use the [Apify integration](../../../../ecosystem/apify.md) for LangChain.\n",
"\n",
"[Apify](https://apify.com) is a cloud platform for web scraping and data extraction,\n",
"which provides an [ecosystem](https://apify.com/store) of more than a thousand\n",
"ready-made apps called *Actors* for various web scraping, crawling, and data extraction use cases.\n",
"For example, you can use it to extract Google Search results, Instagram and Facebook profiles, products from Amazon or Shopify, Google Maps reviews, etc. etc.\n",
"\n",
"In this example, we'll use the [Website Content Crawler](https://apify.com/apify/website-content-crawler) Actor,\n",
"which can deeply crawl websites such as documentation, knowledge bases, help centers, or blogs,\n",
"and extract text content from the web pages. Then we feed the documents into a vector index and answer questions from it.\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"First, import `ApifyWrapper` into your source code:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders.base import Document\n",
"from langchain.indexes import VectorstoreIndexCreator\n",
"from langchain.utilities import ApifyWrapper"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Initialize it using your [Apify API token](https://console.apify.com/account/integrations) and for the purpose of this example, also with your OpenAI API key:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"os.environ[\"OPENAI_API_KEY\"] = \"Your OpenAI API key\"\n",
"os.environ[\"APIFY_API_TOKEN\"] = \"Your Apify API token\"\n",
"\n",
"apify = ApifyWrapper()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Then run the Actor, wait for it to finish, and fetch its results from the Apify dataset into a LangChain document loader.\n",
"\n",
"Note that if you already have some results in an Apify dataset, you can load them directly using `ApifyDatasetLoader`, as shown in [this notebook](../../../indexes/document_loaders/examples/apify_dataset.ipynb). In that notebook, you'll also find the explanation of the `dataset_mapping_function`, which is used to map fields from the Apify dataset records to LangChain `Document` fields."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"loader = apify.call_actor(\n",
" actor_id=\"apify/website-content-crawler\",\n",
" run_input={\"startUrls\": [{\"url\": \"https://python.langchain.com/en/latest/\"}]},\n",
" dataset_mapping_function=lambda item: Document(\n",
" page_content=item[\"text\"] or \"\", metadata={\"source\": item[\"url\"]}\n",
" ),\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Initialize the vector index from the crawled documents:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"index = VectorstoreIndexCreator().from_loaders([loader])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"And finally, query the vector index:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"query = \"What is LangChain?\"\n",
"result = index.query_with_sources(query)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). It provides modules that can be used to build language model applications, and it also provides chains and agents with memory capabilities.\n",
"\n",
"https://python.langchain.com/en/latest/modules/models/llms.html, https://python.langchain.com/en/latest/getting_started/getting_started.html\n"
]
}
],
"source": [
"print(result[\"answer\"])\n",
"print(result[\"sources\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,120 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3f34700b",
"metadata": {},
"source": [
"# ChatGPT Plugins\n",
"\n",
"This example shows how to use ChatGPT Plugins within LangChain abstractions.\n",
"\n",
"Note 1: This currently only works for plugins with no auth.\n",
"\n",
"Note 2: There are almost certainly other ways to do this, this is just a first pass. If you have better ideas, please open a PR!"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d41405b5",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.agents import load_tools, initialize_agent\n",
"from langchain.tools import AIPluginTool"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d9e61df5",
"metadata": {},
"outputs": [],
"source": [
"tool = AIPluginTool.from_plugin_url(\"https://www.klarna.com/.well-known/ai-plugin.json\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "edc0ea0e",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to check the Klarna Shopping API to see if it has information on available t shirts.\n",
"Action: KlarnaProducts\n",
"Action Input: None\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mUsage Guide: Use the Klarna plugin to get relevant product suggestions for any shopping or researching purpose. The query to be sent should not include stopwords like articles, prepositions and determinants. The api works best when searching for words that are related to products, like their name, brand, model or category. Links will always be returned and should be shown to the user.\n",
"\n",
"OpenAPI Spec: {'openapi': '3.0.1', 'info': {'version': 'v0', 'title': 'Open AI Klarna product Api'}, 'servers': [{'url': 'https://www.klarna.com/us/shopping'}], 'tags': [{'name': 'open-ai-product-endpoint', 'description': 'Open AI Product Endpoint. Query for products.'}], 'paths': {'/public/openai/v0/products': {'get': {'tags': ['open-ai-product-endpoint'], 'summary': 'API for fetching Klarna product information', 'operationId': 'productsUsingGET', 'parameters': [{'name': 'q', 'in': 'query', 'description': 'query, must be between 2 and 100 characters', 'required': True, 'schema': {'type': 'string'}}, {'name': 'size', 'in': 'query', 'description': 'number of products returned', 'required': False, 'schema': {'type': 'integer'}}, {'name': 'budget', 'in': 'query', 'description': 'maximum price of the matching product in local currency, filters results', 'required': False, 'schema': {'type': 'integer'}}], 'responses': {'200': {'description': 'Products found', 'content': {'application/json': {'schema': {'$ref': '#/components/schemas/ProductResponse'}}}}, '503': {'description': 'one or more services are unavailable'}}, 'deprecated': False}}}, 'components': {'schemas': {'Product': {'type': 'object', 'properties': {'attributes': {'type': 'array', 'items': {'type': 'string'}}, 'name': {'type': 'string'}, 'price': {'type': 'string'}, 'url': {'type': 'string'}}, 'title': 'Product'}, 'ProductResponse': {'type': 'object', 'properties': {'products': {'type': 'array', 'items': {'$ref': '#/components/schemas/Product'}}}, 'title': 'ProductResponse'}}}}\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to use the Klarna Shopping API to search for t shirts.\n",
"Action: requests_get\n",
"Action Input: https://www.klarna.com/us/shopping/public/openai/v0/products?q=t%20shirts\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m{\"products\":[{\"name\":\"Lacoste Men's Pack of Plain T-Shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202043025/Clothing/Lacoste-Men-s-Pack-of-Plain-T-Shirts/?utm_source=openai\",\"price\":\"$26.60\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Black\"]},{\"name\":\"Hanes Men's Ultimate 6pk. Crewneck T-Shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201808270/Clothing/Hanes-Men-s-Ultimate-6pk.-Crewneck-T-Shirts/?utm_source=openai\",\"price\":\"$13.82\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White\"]},{\"name\":\"Nike Boy's Jordan Stretch T-shirts\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3201863202/Children-s-Clothing/Nike-Boy-s-Jordan-Stretch-T-shirts/?utm_source=openai\",\"price\":\"$14.99\",\"attributes\":[\"Material:Cotton\",\"Color:White,Green\",\"Model:Boy\",\"Size (Small-Large):S,XL,L,M\"]},{\"name\":\"Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203028500/Clothing/Polo-Classic-Fit-Cotton-V-Neck-T-Shirts-3-Pack/?utm_source=openai\",\"price\":\"$29.95\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Blue,Black\"]},{\"name\":\"adidas Comfort T-shirts Men's 3-pack\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202640533/Clothing/adidas-Comfort-T-shirts-Men-s-3-pack/?utm_source=openai\",\"price\":\"$14.99\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:White,Black\",\"Neckline:Round\"]}]}\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe available t shirts in Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\n",
"Final Answer: The available t shirts in Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"The available t shirts in Klarna are Lacoste Men's Pack of Plain T-Shirts, Hanes Men's Ultimate 6pk. Crewneck T-Shirts, Nike Boy's Jordan Stretch T-shirts, Polo Classic Fit Cotton V-Neck T-Shirts 3-Pack, and adidas Comfort T-shirts Men's 3-pack.\""
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm = ChatOpenAI(temperature=0,)\n",
"tools = load_tools([\"requests\"] )\n",
"tools += [tool]\n",
"\n",
"agent_chain = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", verbose=True)\n",
"agent_chain.run(\"what t shirts are available in klarna?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e49318a4",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,132 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Human as a tool\n",
"\n",
"Human are AGI so they can certainly be used as a tool to help out AI agent \n",
"when it is confused."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI\n",
"from langchain.agents import load_tools, initialize_agent\n",
"\n",
"llm = ChatOpenAI(temperature=0.0)\n",
"math_llm = OpenAI(temperature=0.0)\n",
"tools = load_tools(\n",
" [\"human\", \"llm-math\"], \n",
" llm=math_llm,\n",
")\n",
"\n",
"agent_chain = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=\"zero-shot-react-description\",\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the above code you can see the tool takes input directly from command line.\n",
"You can customize `prompt_func` and `input_func` according to your need."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI don't know Eric Zhu, so I should ask a human for guidance.\n",
"Action: Human\n",
"Action Input: \"Do you know when Eric Zhu's birthday is?\"\u001b[0m\n",
"\n",
"Do you know when Eric Zhu's birthday is?\n",
"last week\n",
"\n",
"Observation: \u001b[36;1m\u001b[1;3mlast week\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThat's not very helpful. I should ask for more information.\n",
"Action: Human\n",
"Action Input: \"Do you know the specific date of Eric Zhu's birthday?\"\u001b[0m\n",
"\n",
"Do you know the specific date of Eric Zhu's birthday?\n",
"august 1st\n",
"\n",
"Observation: \u001b[36;1m\u001b[1;3maugust 1st\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mNow that I have the date, I can check if it's a leap year or not.\n",
"Action: Calculator\n",
"Action Input: \"Is 2021 a leap year?\"\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3mAnswer: False\n",
"\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI have all the information I need to answer the original question.\n",
"Final Answer: Eric Zhu's birthday is on August 1st and it is not a leap year in 2021.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Eric Zhu's birthday is on August 1st and it is not a leap year in 2021.\""
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"\n",
"agent_chain.run(\"What is Eric Zhu's birthday?\")\n",
"# Answer with \"last week\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,128 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "245a954a",
"metadata": {},
"source": [
"# OpenWeatherMap API\n",
"\n",
"This notebook goes over how to use the OpenWeatherMap component to fetch weather information.\n",
"\n",
"First, you need to sign up for an OpenWeatherMap API key:\n",
"\n",
"1. Go to OpenWeatherMap and sign up for an API key [here](https://openweathermap.org/api/)\n",
"2. pip install pyowm\n",
"\n",
"Then we will need to set some environment variables:\n",
"1. Save your API KEY into OPENWEATHERMAP_API_KEY env variable"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "961b3689",
"metadata": {
"vscode": {
"languageId": "shellscript"
}
},
"outputs": [],
"source": [
"pip install pyowm"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "34bb5968",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"os.environ[\"OPENWEATHERMAP_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "ac4910f8",
"metadata": {},
"outputs": [],
"source": [
"from langchain.utilities import OpenWeatherMapAPIWrapper"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "84b8f773",
"metadata": {},
"outputs": [],
"source": [
"weather = OpenWeatherMapAPIWrapper()"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "9651f324-e74a-4f08-a28a-89db029f66f8",
"metadata": {},
"outputs": [],
"source": [
"weather_data = weather.run(\"London,GB\")"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "028f4cba",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In London,GB, the current weather is as follows:\n",
"Detailed status: overcast clouds\n",
"Wind speed: 4.63 m/s, direction: 150°\n",
"Humidity: 67%\n",
"Temperature: \n",
" - Current: 5.35°C\n",
" - High: 6.26°C\n",
" - Low: 3.49°C\n",
" - Feels like: 1.95°C\n",
"Rain: {}\n",
"Heat index: None\n",
"Cloud cover: 100%\n"
]
}
],
"source": [
"print(weather_data)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,326 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "16763ed3",
"metadata": {},
"source": [
"# Zapier Natural Language Actions API\n",
"\\\n",
"Full docs here: https://nla.zapier.com/api/v1/docs\n",
"\n",
"**Zapier Natural Language Actions** gives you access to the 5k+ apps, 20k+ actions on Zapier's platform through a natural language API interface.\n",
"\n",
"NLA supports apps like Gmail, Salesforce, Trello, Slack, Asana, HubSpot, Google Sheets, Microsoft Teams, and thousands more apps: https://zapier.com/apps\n",
"\n",
"Zapier NLA handles ALL the underlying API auth and translation from natural language --> underlying API call --> return simplified output for LLMs. The key idea is you, or your users, expose a set of actions via an oauth-like setup window, which you can then query and execute via a REST API.\n",
"\n",
"NLA offers both API Key and OAuth for signing NLA API requests.\n",
"\n",
"1. Server-side (API Key): for quickly getting started, testing, and production scenarios where LangChain will only use actions exposed in the developer's Zapier account (and will use the developer's connected accounts on Zapier.com)\n",
"\n",
"2. User-facing (Oauth): for production scenarios where you are deploying an end-user facing application and LangChain needs access to end-user's exposed actions and connected accounts on Zapier.com\n",
"\n",
"This quick start will focus on the server-side use case for brevity. Review [full docs](https://nla.zapier.com/api/v1/docs) or reach out to nla@zapier.com for user-facing oauth developer support.\n",
"\n",
"This example goes over how to use the Zapier integration with a `SimpleSequentialChain`, then an `Agent`.\n",
"In code, below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a363309c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5cf33377",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# get from https://platform.openai.com/\n",
"os.environ[\"OPENAI_API_KEY\"] = os.environ.get(\"OPENAI_API_KEY\", \"\")\n",
"\n",
"# get from https://nla.zapier.com/demo/provider/debug (under User Information, after logging in): \n",
"os.environ[\"ZAPIER_NLA_API_KEY\"] = os.environ.get(\"ZAPIER_NLA_API_KEY\", \"\")"
]
},
{
"cell_type": "markdown",
"id": "4881b484-1b97-478f-b206-aec407ceff66",
"metadata": {},
"source": [
"## Example with Agent\n",
"Zapier tools can be used with an agent. See the example below."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "b2044b17-c941-4ffb-8a03-027a35e2df81",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.agents import initialize_agent\n",
"from langchain.agents.agent_toolkits import ZapierToolkit\n",
"from langchain.utilities.zapier import ZapierNLAWrapper"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "7b505eeb",
"metadata": {},
"outputs": [],
"source": [
"## step 0. expose gmail 'find email' and slack 'send channel message' actions\n",
"\n",
"# first go here, log in, expose (enable) the two actions: https://nla.zapier.com/demo/start -- for this example, can leave all fields \"Have AI guess\"\n",
"# in an oauth scenario, you'd get your own <provider> id (instead of 'demo') which you route your users through first"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "cab18227-c232-4214-9256-bb8dd352266c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm = OpenAI(temperature=0)\n",
"zapier = ZapierNLAWrapper()\n",
"toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)\n",
"agent = initialize_agent(toolkit.get_tools(), llm, agent=\"zero-shot-react-description\", verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "f94713de-b64d-465f-a087-00288b5f80ec",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find the email and summarize it.\n",
"Action: Gmail: Find Email\n",
"Action Input: Find the latest email from Silicon Valley Bank\u001b[0m\n",
"Observation: \u001b[31;1m\u001b[1;3m{\"from__name\": \"Silicon Valley Bridge Bank, N.A.\", \"from__email\": \"sreply@svb.com\", \"body_plain\": \"Dear Clients, After chaotic, tumultuous & stressful days, we have clarity on path for SVB, FDIC is fully insuring all deposits & have an ask for clients & partners as we rebuild. Tim Mayopoulos <https://eml.svb.com/NjEwLUtBSy0yNjYAAAGKgoxUeBCLAyF_NxON97X4rKEaNBLG\", \"reply_to__email\": \"sreply@svb.com\", \"subject\": \"Meet the new CEO Tim Mayopoulos\", \"date\": \"Tue, 14 Mar 2023 23:42:29 -0500 (CDT)\", \"message_url\": \"https://mail.google.com/mail/u/0/#inbox/186e393b13cfdf0a\", \"attachment_count\": \"0\", \"to__emails\": \"ankush@langchain.dev\", \"message_id\": \"186e393b13cfdf0a\", \"labels\": \"IMPORTANT, CATEGORY_UPDATES, INBOX\"}\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I need to summarize the email and send it to the #test-zapier channel in Slack.\n",
"Action: Slack: Send Channel Message\n",
"Action Input: Send a slack message to the #test-zapier channel with the text \"Silicon Valley Bank has announced that Tim Mayopoulos is the new CEO. FDIC is fully insuring all deposits and they have an ask for clients and partners as they rebuild.\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m{\"message__text\": \"Silicon Valley Bank has announced that Tim Mayopoulos is the new CEO. FDIC is fully insuring all deposits and they have an ask for clients and partners as they rebuild.\", \"message__permalink\": \"https://langchain.slack.com/archives/C04TSGU0RA7/p1678859932375259\", \"channel\": \"C04TSGU0RA7\", \"message__bot_profile__name\": \"Zapier\", \"message__team\": \"T04F8K3FZB5\", \"message__bot_id\": \"B04TRV4R74K\", \"message__bot_profile__deleted\": \"false\", \"message__bot_profile__app_id\": \"A024R9PQM\", \"ts_time\": \"2023-03-15T05:58:52Z\", \"message__bot_profile__icons__image_36\": \"https://avatars.slack-edge.com/2022-08-02/3888649620612_f864dc1bb794cf7d82b0_36.png\", \"message__blocks[]block_id\": \"kdZZ\", \"message__blocks[]elements[]type\": \"['rich_text_section']\"}\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: I have sent a summary of the last email from Silicon Valley Bank to the #test-zapier channel in Slack.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'I have sent a summary of the last email from Silicon Valley Bank to the #test-zapier channel in Slack.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Summarize the last email I received regarding Silicon Valley Bank. Send the summary to the #test-zapier channel in slack.\")"
]
},
{
"cell_type": "markdown",
"id": "bcdea831",
"metadata": {},
"source": [
"# Example with SimpleSequentialChain\n",
"If you need more explicit control, use a chain, like below."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "10a46e7e",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.chains import LLMChain, TransformChain, SimpleSequentialChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.tools.zapier.tool import ZapierNLARunAction\n",
"from langchain.utilities.zapier import ZapierNLAWrapper"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "b9358048",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"## step 0. expose gmail 'find email' and slack 'send direct message' actions\n",
"\n",
"# first go here, log in, expose (enable) the two actions: https://nla.zapier.com/demo/start -- for this example, can leave all fields \"Have AI guess\"\n",
"# in an oauth scenario, you'd get your own <provider> id (instead of 'demo') which you route your users through first\n",
"\n",
"actions = ZapierNLAWrapper().list()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "4e80f461",
"metadata": {},
"outputs": [],
"source": [
"## step 1. gmail find email\n",
"\n",
"GMAIL_SEARCH_INSTRUCTIONS = \"Grab the latest email from Silicon Valley Bank\"\n",
"\n",
"def nla_gmail(inputs):\n",
" action = next((a for a in actions if a[\"description\"].startswith(\"Gmail: Find Email\")), None)\n",
" return {\"email_data\": ZapierNLARunAction(action_id=action[\"id\"], zapier_description=action[\"description\"], params_schema=action[\"params\"]).run(inputs[\"instructions\"])}\n",
"gmail_chain = TransformChain(input_variables=[\"instructions\"], output_variables=[\"email_data\"], transform=nla_gmail)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "46893233",
"metadata": {},
"outputs": [],
"source": [
"## step 2. generate draft reply\n",
"\n",
"template = \"\"\"You are an assisstant who drafts replies to an incoming email. Output draft reply in plain text (not JSON).\n",
"\n",
"Incoming email:\n",
"{email_data}\n",
"\n",
"Draft email reply:\"\"\"\n",
"\n",
"prompt_template = PromptTemplate(input_variables=[\"email_data\"], template=template)\n",
"reply_chain = LLMChain(llm=OpenAI(temperature=.7), prompt=prompt_template)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "cd85c4f8",
"metadata": {},
"outputs": [],
"source": [
"## step 3. send draft reply via a slack direct message\n",
"\n",
"SLACK_HANDLE = \"@Ankush Gola\"\n",
"\n",
"def nla_slack(inputs):\n",
" action = next((a for a in actions if a[\"description\"].startswith(\"Slack: Send Direct Message\")), None)\n",
" instructions = f'Send this to {SLACK_HANDLE} in Slack: {inputs[\"draft_reply\"]}'\n",
" return {\"slack_data\": ZapierNLARunAction(action_id=action[\"id\"], zapier_description=action[\"description\"], params_schema=action[\"params\"]).run(instructions)}\n",
"slack_chain = TransformChain(input_variables=[\"draft_reply\"], output_variables=[\"slack_data\"], transform=nla_slack)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "4829cab4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new SimpleSequentialChain chain...\u001b[0m\n",
"\u001b[36;1m\u001b[1;3m{\"from__name\": \"Silicon Valley Bridge Bank, N.A.\", \"from__email\": \"sreply@svb.com\", \"body_plain\": \"Dear Clients, After chaotic, tumultuous & stressful days, we have clarity on path for SVB, FDIC is fully insuring all deposits & have an ask for clients & partners as we rebuild. Tim Mayopoulos <https://eml.svb.com/NjEwLUtBSy0yNjYAAAGKgoxUeBCLAyF_NxON97X4rKEaNBLG\", \"reply_to__email\": \"sreply@svb.com\", \"subject\": \"Meet the new CEO Tim Mayopoulos\", \"date\": \"Tue, 14 Mar 2023 23:42:29 -0500 (CDT)\", \"message_url\": \"https://mail.google.com/mail/u/0/#inbox/186e393b13cfdf0a\", \"attachment_count\": \"0\", \"to__emails\": \"ankush@langchain.dev\", \"message_id\": \"186e393b13cfdf0a\", \"labels\": \"IMPORTANT, CATEGORY_UPDATES, INBOX\"}\u001b[0m\n",
"\u001b[33;1m\u001b[1;3m\n",
"Dear Silicon Valley Bridge Bank, \n",
"\n",
"Thank you for your email and the update regarding your new CEO Tim Mayopoulos. We appreciate your dedication to keeping your clients and partners informed and we look forward to continuing our relationship with you. \n",
"\n",
"Best regards, \n",
"[Your Name]\u001b[0m\n",
"\u001b[38;5;200m\u001b[1;3m{\"message__text\": \"Dear Silicon Valley Bridge Bank, \\n\\nThank you for your email and the update regarding your new CEO Tim Mayopoulos. We appreciate your dedication to keeping your clients and partners informed and we look forward to continuing our relationship with you. \\n\\nBest regards, \\n[Your Name]\", \"message__permalink\": \"https://langchain.slack.com/archives/D04TKF5BBHU/p1678859968241629\", \"channel\": \"D04TKF5BBHU\", \"message__bot_profile__name\": \"Zapier\", \"message__team\": \"T04F8K3FZB5\", \"message__bot_id\": \"B04TRV4R74K\", \"message__bot_profile__deleted\": \"false\", \"message__bot_profile__app_id\": \"A024R9PQM\", \"ts_time\": \"2023-03-15T05:59:28Z\", \"message__blocks[]block_id\": \"p7i\", \"message__blocks[]elements[]elements[]type\": \"[['text']]\", \"message__blocks[]elements[]type\": \"['rich_text_section']\"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'{\"message__text\": \"Dear Silicon Valley Bridge Bank, \\\\n\\\\nThank you for your email and the update regarding your new CEO Tim Mayopoulos. We appreciate your dedication to keeping your clients and partners informed and we look forward to continuing our relationship with you. \\\\n\\\\nBest regards, \\\\n[Your Name]\", \"message__permalink\": \"https://langchain.slack.com/archives/D04TKF5BBHU/p1678859968241629\", \"channel\": \"D04TKF5BBHU\", \"message__bot_profile__name\": \"Zapier\", \"message__team\": \"T04F8K3FZB5\", \"message__bot_id\": \"B04TRV4R74K\", \"message__bot_profile__deleted\": \"false\", \"message__bot_profile__app_id\": \"A024R9PQM\", \"ts_time\": \"2023-03-15T05:59:28Z\", \"message__blocks[]block_id\": \"p7i\", \"message__blocks[]elements[]elements[]type\": \"[[\\'text\\']]\", \"message__blocks[]elements[]type\": \"[\\'rich_text_section\\']\"}'"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"## finally, execute\n",
"\n",
"overall_chain = SimpleSequentialChain(chains=[gmail_chain, reply_chain, slack_chain], verbose=True)\n",
"overall_chain.run(GMAIL_SEARCH_INSTRUCTIONS)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "09ff954e-45f2-4595-92ea-91627abde4a0",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,10 +1,6 @@
Chains
==========================
.. note::
`Conceptual Guide <https://docs.langchain.com/docs/components/chains>`_
Using an LLM in isolation is fine for some simple applications,
but many more complex ones require chaining LLMs - either with each other or with other experts.
LangChain provides a standard interface for Chains, as well as some common implementations of chains for ease of use.
@@ -13,6 +9,8 @@ The following sections of documentation are provided:
- `Getting Started <./chains/getting_started.html>`_: A getting started guide for chains, to get you up and running quickly.
- `Key Concepts <./chains/key_concepts.html>`_: A conceptual guide going over the various concepts related to chains.
- `How-To Guides <./chains/how_to_guides.html>`_: A collection of how-to guides. These highlight how to use various types of chains.
- `Reference <../reference/modules/chains.html>`_: API reference documentation for all Chain classes.
@@ -27,4 +25,5 @@ The following sections of documentation are provided:
./chains/getting_started.ipynb
./chains/how_to_guides.rst
./chains/key_concepts.rst
Reference<../reference/modules/chains.rst>

View File

@@ -149,33 +149,6 @@
"chain.run(\"Search for 'Avatar'\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Listen API Example"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains.api import podcast_docs\n",
"from langchain.chains import APIChain\n",
"\n",
"# Get api key here: https://www.listennotes.com/api/pricing/\n",
"listen_api_key = 'xxx'\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"headers = {\"X-ListenAPI-Key\": listen_api_key}\n",
"chain = APIChain.from_llm_and_api_docs(llm, podcast_docs.PODCAST_DOCS, headers=headers, verbose=True)\n",
"chain.run(\"Search for 'silicon valley bank' podcast episodes, audio length is more than 30 minutes, return only 1 results\")\n"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -200,7 +173,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.9"
}
},
"nbformat": 4,

View File

@@ -5,7 +5,7 @@
"metadata": {},
"source": [
"# BashChain\n",
"This notebook showcases using LLMs and a bash process to perform simple filesystem commands."
"This notebook showcases using LLMs and a bash process to do perform simple filesystem commands."
]
},
{

View File

@@ -532,7 +532,7 @@
"id": "5fc6f507",
"metadata": {},
"source": [
"Note how our custom table definition and sample rows for `Track` overrides the `sample_rows_in_table_info` parameter. Tables that are not overridden by `custom_table_info`, in this example `Playlist`, will have their table info gathered automatically as usual."
"Note how our custom table definition and sample rows for `Track` overrides the `sample_rows_in_table_info` parameter. Tables that are not overriden by `custom_table_info`, in this example `Playlist`, will have their table info gathered automatically as usual."
]
},
{

View File

@@ -34,10 +34,10 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"whats 2 raised to .12\u001b[32;1m\u001b[1;3m\n",
"Answer: 1.0791812460476249\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"\u001B[1m> Entering new LLMMathChain chain...\u001B[0m\n",
"whats 2 raised to .12\u001B[32;1m\u001B[1;3m\n",
"Answer: 1.0791812460476249\u001B[0m\n",
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{

View File

@@ -31,7 +31,7 @@
"metadata": {},
"outputs": [],
"source": [
"with open(\"../../state_of_the_union.txt\") as f:\n",
"with open('../../state_of_the_union.txt') as f:\n",
" state_of_the_union = f.read()"
]
},
@@ -122,7 +122,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.9"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,33 @@
Generic Chains
--------------
A chain is made up of links, which can be either primitives or other chains.
Primitives can be either `prompts <../prompts.html>`_, `llms <../llms.html>`_, `utils <../utils.html>`_, or other chains.
The examples here are all generic chains, meant to be used as building blocks for other chains rather than to serve a specific purpose on their own.
**LLMChain**
- **Links Used**: PromptTemplate, LLM
- **Notes**: This chain is the simplest chain, and is widely used by almost every other chain. This chain takes arbitrary user input, creates a prompt with it from the PromptTemplate, passes that to the LLM, and then returns the output of the LLM as the final output.
- `Example Notebook <./generic/llm_chain.html>`_
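As a quick sketch of the pattern (the prompt text and temperature here are illustrative, not prescriptive):

.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    # A prompt with a single input variable.
    prompt = PromptTemplate(
        input_variables=["product"],
        template="What is a good name for a company that makes {product}?",
    )

    # The chain formats the prompt with the user input, calls the LLM,
    # and returns the model's completion as the chain output.
    chain = LLMChain(llm=OpenAI(temperature=0.9), prompt=prompt)
    chain.run("colorful socks")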
**Transformation Chain**
- **Links Used**: TransformationChain
- **Notes**: This notebook shows how to use the Transformation Chain, which takes an arbitrary Python function and applies it to the inputs/outputs of other chains.
- `Example Notebook <./generic/transformation.html>`_
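A minimal sketch of the idea (the whitespace-cleanup function below is just an example of an arbitrary Python transform):

.. code-block:: python

    from langchain.chains import TransformChain

    def clean_extra_spaces(inputs: dict) -> dict:
        # Collapse runs of whitespace in the incoming text.
        return {"output_text": " ".join(inputs["text"].split())}

    transform_chain = TransformChain(
        input_variables=["text"],
        output_variables=["output_text"],
        transform=clean_extra_spaces,
    )
    transform_chain.run("hello     world")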
**Sequential Chain**
- **Links Used**: Sequential
- **Notes**: This notebook shows how to combine calling multiple other chains in sequence.
- `Example Notebook <./generic/sequential_chains.html>`_
.. toctree::
:maxdepth: 1
:glob:
:caption: Generic Chains
:name: generic
:hidden:
./generic/*

View File

@@ -2,37 +2,23 @@ How-To Guides
=============
A chain is made up of links, which can be either primitives or other chains.
Primitives can be either `prompts <../prompts.html>`_, `models <../models.html>`_, arbitrary functions, or other chains.
The examples here are broken up into three sections:
Primitives can be either `prompts <../prompts.html>`_, `llms <../llms.html>`_, `utils <../utils.html>`_, or other chains.
The examples here are all end-to-end chains for specific applications.
They are broken up into three categories:
**Generic Functionality**
Covers both generic chains (that are useful in a wide variety of applications) as well as generic functionality related to those chains.
1. `Generic Chains <./generic_how_to.html>`_: Generic chains that are meant to help build other chains rather than serve a particular purpose.
2. `Utility Chains <./utility_how_to.html>`_: Chains consisting of an LLMChain interacting with a specific util.
3. `Asynchronous <./async_chain.html>`_: Covering asynchronous functionality.
.. toctree::
:maxdepth: 1
:glob:
:hidden:
./generic/*
./generic_how_to.rst
./utility_how_to.rst
./async_chain.ipynb
**Index-related Chains**
Chains related to working with indexes.
.. toctree::
:maxdepth: 1
:glob:
./index_examples/*
**All other chains**
All other types of chains!
.. toctree::
:maxdepth: 1
:glob:
./examples/*
In addition to different types of chains, we also have the following how-to guides for working with chains in general:
`Load From Hub <./generic/from_hub.html>`_: This notebook covers how to load chains from `LangChainHub <https://github.com/hwchase17/langchain-hub>`_.

View File

@@ -0,0 +1,20 @@
# Key Concepts
## Chains
A chain is made up of links, which can be either primitives or other chains.
They vary greatly in complexity and are combinations of generic, highly configurable pipelines and more narrow (but usually more complex) pipelines.
## Sequential Chain
This is a specific type of chain where multiple other chains are run in sequence, with the outputs being added as inputs
to the next. A subtype of this type of chain is the [`SimpleSequentialChain`](./generic/sequential_chains.html#simplesequentialchain), where all subchains have only one input and one output,
and the output of one is therefore used as sole input to the next chain.
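A minimal sketch of wiring two single-input/single-output chains together (the prompts here are illustrative):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=0.7)

# First chain: product description -> company name.
name_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
))

# Second chain: company name -> slogan. Its single input is the single
# output of the first chain.
slogan_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["company"],
    template="Write a catchy slogan for a company called {company}.",
))

overall_chain = SimpleSequentialChain(chains=[name_chain, slogan_chain], verbose=True)
overall_chain.run("colorful socks")
```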
## Prompt Selectors
One thing that we've noticed is that the best prompt to use is really dependent on the model you use.
Some prompts work really well with some models, but not so well with others.
One of our goals is to provide good chains that "just work" out of the box.
A big part of chains like that is having prompts that "just work".
So rather than having a default prompt for chains, we are moving towards a paradigm where, if a prompt is not explicitly
provided, we select one with a PromptSelector. This class takes in the model passed in, and returns a default prompt.
The inner workings of the PromptSelector can look at any aspect of the model - LLM vs ChatModel, OpenAI vs Cohere, GPT3 vs GPT4, etc.
Because this is a newer feature, it may not yet be implemented for all chains, but this is the direction we are moving in.
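As a rough sketch of what this can look like in code (assuming the `ConditionalPromptSelector` implementation and the `is_chat_model` helper; the import path and prompts here are illustrative):

```python
from langchain.chains.prompt_selector import ConditionalPromptSelector, is_chat_model
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

DEFAULT_PROMPT = PromptTemplate(
    input_variables=["text"], template="Summarize this:\n\n{text}"
)
CHAT_PROMPT = PromptTemplate(
    input_variables=["text"],
    template="You are a helpful assistant. Summarize this:\n\n{text}",
)

# Return CHAT_PROMPT for chat models, DEFAULT_PROMPT for everything else.
PROMPT_SELECTOR = ConditionalPromptSelector(
    default_prompt=DEFAULT_PROMPT,
    conditionals=[(is_chat_model, CHAT_PROMPT)],
)

prompt = PROMPT_SELECTOR.get_prompt(ChatOpenAI(temperature=0))
```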

View File

@@ -0,0 +1,65 @@
Utility Chains
--------------
A chain is made up of links, which can be either primitives or other chains.
Primitives can be either `prompts <../prompts.html>`_, `llms <../llms.html>`_, `utils <../utils.html>`_, or other chains.
The examples here are all end-to-end chains for specific applications, each focused on combining an LLMChain with a specific utility.
**LLMMath**
- **Links Used**: Python REPL, LLMChain
- **Notes**: This chain takes user input (a math question), uses an LLMChain to convert it to a Python code snippet to run in the Python REPL, and then returns that as the result.
- `Example Notebook <./examples/llm_math.html>`_
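For example (mirroring the LLMMath notebook):

.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.chains import LLMMathChain

    # The LLM translates the question into a Python snippet, which the chain
    # executes in a Python REPL to get the numeric answer.
    llm_math = LLMMathChain(llm=OpenAI(temperature=0), verbose=True)
    llm_math.run("What is 13 raised to the .3432 power?")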
**PAL**
- **Links Used**: Python REPL, LLMChain
- **Notes**: This chain takes user input (a reasoning question), uses an LLMChain to convert it to a Python code snippet to run in the Python REPL, and then returns that as the result.
- `Paper <https://arxiv.org/abs/2211.10435>`_
- `Example Notebook <./examples/pal.html>`_
**SQLDatabase Chain**
- **Links Used**: SQLDatabase, LLMChain
- **Notes**: This chain takes user input (a question), uses a first LLM chain to construct a SQL query to run against the SQL database, and then uses another LLMChain to take the results of that query and use them to answer the original question.
- `Example Notebook <./examples/sqlite.html>`_
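A minimal sketch (the Chinook sample SQLite database is an assumed local file):

.. code-block:: python

    from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

    # Point the chain at a database; Chinook.db is an assumed local sample DB.
    db = SQLDatabase.from_uri("sqlite:///Chinook.db")
    db_chain = SQLDatabaseChain(llm=OpenAI(temperature=0), database=db, verbose=True)
    db_chain.run("How many employees are there?")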
**API Chain**
- **Links Used**: LLMChain, Requests
- **Notes**: This chain first uses an LLM to construct the URL to hit, then makes that request with the Requests wrapper, and finally runs that result through the language model again in order to produce a natural language response.
- `Example Notebook <./examples/api.html>`_
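A minimal sketch, using one of the bundled API docs modules (the Open-Meteo docs here; other APIs may also need headers, as in the Listen API example):

.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.chains import APIChain
    from langchain.chains.api import open_meteo_docs

    # The LLM builds a request URL from the API docs, the Requests wrapper
    # calls it, and the LLM turns the JSON response into natural language.
    chain = APIChain.from_llm_and_api_docs(
        OpenAI(temperature=0), open_meteo_docs.OPEN_METEO_DOCS, verbose=True
    )
    chain.run("What is the weather like right now in Munich, Germany in degrees Celsius?")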
**LLMBash Chain**
- **Links Used**: BashProcess, LLMChain
- **Notes**: This chain takes user input (a question), uses an LLM chain to convert it to a bash command to run in the terminal, and then returns that as the result.
- `Example Notebook <./examples/llm_bash.html>`_
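For example (mirroring the BashChain notebook):

.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.chains import LLMBashChain

    # The LLM writes a bash command for the request; the chain runs it in a
    # subprocess and returns the command's output.
    bash_chain = LLMBashChain(llm=OpenAI(temperature=0), verbose=True)
    bash_chain.run("Please write a bash script that prints 'Hello World' to the console.")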
**LLMChecker Chain**
- **Links Used**: LLMChain
- **Notes**: This chain takes user input (a question), uses an LLM chain to answer that question, and then uses other LLMChains to self-check that answer.
- `Example Notebook <./examples/llm_checker.html>`_
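A minimal sketch:

.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.chains import LLMCheckerChain

    # A higher temperature makes the initial answer more likely to need checking.
    checker_chain = LLMCheckerChain(llm=OpenAI(temperature=0.7), verbose=True)
    checker_chain.run("What type of mammal lays the biggest eggs?")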
**LLMRequests Chain**
- **Links Used**: Requests, LLMChain
- **Notes**: This chain takes a URL and other inputs, uses Requests to get the data at that URL, and then passes that along with the other inputs into an LLMChain to generate a response. The example included shows how to ask a question to Google - it first constructs a Google URL, then fetches the data there, then passes that data + the original question into an LLMChain to get an answer.
- `Example Notebook <./examples/llm_requests.html>`_
**Moderation Chain**
- **Links Used**: LLMChain, ModerationChain
- **Notes**: This chain shows how to use OpenAI's content moderation endpoint to screen output, and shows how to connect this to an LLMChain.
- `Example Notebook <./examples/moderation.html>`_
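A minimal sketch (the concrete class for OpenAI's endpoint is `OpenAIModerationChain`; it expects `OPENAI_API_KEY` to be set):

.. code-block:: python

    from langchain.chains import OpenAIModerationChain

    # Flagged text is replaced with a warning string by default; pass
    # error=True at construction time to raise an error instead.
    moderation_chain = OpenAIModerationChain()
    moderation_chain.run("This is okay")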
.. toctree::
:maxdepth: 1
:glob:
:caption: Utility Chains
:name: utility
:hidden:
./examples/*

View File

@@ -1,10 +1,6 @@
Chat Models
Chat
==========================
.. note::
`Conceptual Guide <https://docs.langchain.com/docs/components/models/chat-model>`_
Chat models are a variation on language models.
While chat models use language models under the hood, the interface they expose is a bit different.
Rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
@@ -13,18 +9,18 @@ Chat model APIs are fairly new, so we are still figuring out the correct abstrac
The following sections of documentation are provided:
- `Getting Started <./chat/getting_started.html>`_: An overview of all the functionality the LangChain LLM class provides.
- `Getting Started <./chat/getting_started.html>`_: An overview of the basics of chat models.
- `How-To Guides <./chat/how_to_guides.html>`_: A collection of how-to guides. These highlight how to accomplish various objectives with our LLM class (streaming, async, etc).
- `Key Concepts <./chat/key_concepts.html>`_: A conceptual guide going over the various concepts related to chat models.
- `Integrations <./chat/integrations.html>`_: A collection of examples on how to integrate different LLM providers with LangChain (OpenAI, Hugging Face, etc).
- `How-To Guides <./chat/how_to_guides.html>`_: A collection of how-to guides. These highlight how to accomplish various objectives with our chat model class, as well as how to integrate with various chat model providers.
.. toctree::
:maxdepth: 1
:name: LLMs
:hidden:
./chat/getting_started.ipynb
./chat/key_concepts.md
./chat/how_to_guides.rst
./chat/integrations.rst

View File

@@ -0,0 +1,208 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e58f4d5a",
"metadata": {},
"source": [
"# Agent\n",
"This notebook covers how to create a custom agent for a chat model. It will utilize chat specific prompts."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5268c7fa",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import ZeroShotAgent, Tool, AgentExecutor\n",
"from langchain.chains import LLMChain\n",
"from langchain.utilities import SerpAPIWrapper"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fbaa4dbe",
"metadata": {},
"outputs": [],
"source": [
"search = SerpAPIWrapper()\n",
"tools = [\n",
" Tool(\n",
" name = \"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\"\n",
" )\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "f3ba6f08",
"metadata": {},
"outputs": [],
"source": [
"prefix = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\"\"\"\n",
"suffix = \"\"\"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Args\"\"\"\n",
"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools, \n",
" prefix=prefix, \n",
" suffix=suffix, \n",
" input_variables=[]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3547a37d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
" AIMessagePromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a78f886f",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" SystemMessagePromptTemplate(prompt=prompt),\n",
" HumanMessagePromptTemplate.from_template(\"{input}\\n\\nThis was your previous work \"\n",
" f\"(but I haven't seen any of it! I only see what \"\n",
" \"you return as final answer):\\n{agent_scratchpad}\")\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "dadadd70",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(messages)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b7180182",
"metadata": {},
"outputs": [],
"source": [
"llm_chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "ddddb07b",
"metadata": {},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in tools]\n",
"agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "36aef054",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "33a4d6cc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mArrr, ye be in luck, matey! I'll find ye the answer to yer question.\n",
"\n",
"Thought: I need to search for the current population of Canada.\n",
"Action: Search\n",
"Action Input: \"current population of Canada 2023\"\n",
"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mThe current population of Canada is 38,623,091 as of Saturday, March 4, 2023, based on Worldometer elaboration of the latest United Nations data.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mAhoy, me hearties! I've found the answer to yer question.\n",
"\n",
"Final Answer: As of March 4, 2023, the population of Canada be 38,623,091. Arrr!\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'As of March 4, 2023, the population of Canada be 38,623,091. Arrr!'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"How many people live in canada as of 2023?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6aefe978",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,376 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "134a0785",
"metadata": {},
"source": [
"# Chat Vector DB\n",
"\n",
"This notebook goes over how to set up a chat model to chat with a vector database.\n",
"\n",
"This notebook is very similar to the example of using an LLM in the ChatVectorDBChain. The only differences here are (1) using a ChatModel, and (2) passing in a ChatPromptTemplate (optimized for chat models)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "70c4e529",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.chains import ChatVectorDBChain"
]
},
{
"cell_type": "markdown",
"id": "cdff94be",
"metadata": {},
"source": [
"Load in documents. You can replace this with a loader for whatever type of data you want"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "01c46e92",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"id": "e9be4779",
"metadata": {},
"source": [
"If you had multiple loaders that you wanted to combine, you do something like:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "433363a5",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# loaders = [....]\n",
"# docs = []\n",
"# for loader in loaders:\n",
"# docs.extend(loader.load())"
]
},
{
"cell_type": "markdown",
"id": "239475d2",
"metadata": {},
"source": [
"We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a8930cf7",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"documents = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"vectorstore = Chroma.from_documents(documents, embeddings)"
]
},
{
"cell_type": "markdown",
"id": "18415aca",
"metadata": {},
"source": [
"We are now going to construct a prompt specifically designed for chat models."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "c8805230",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
" AIMessagePromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "cc86c30e",
"metadata": {},
"outputs": [],
"source": [
"system_template=\"\"\"Use the following pieces of context to answer the users question. \n",
"If you don't know the answer, just say that you don't know, don't try to make up an answer.\n",
"----------------\n",
"{context}\"\"\"\n",
"messages = [\n",
" SystemMessagePromptTemplate.from_template(system_template),\n",
" HumanMessagePromptTemplate.from_template(\"{question}\")\n",
"]\n",
"prompt = ChatPromptTemplate.from_messages(messages)"
]
},
{
"cell_type": "markdown",
"id": "3c96b118",
"metadata": {},
"source": [
"We now initialize the ChatVectorDBChain"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "7b4110f3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"qa = ChatVectorDBChain.from_llm(ChatOpenAI(temperature=0), vectorstore,qa_prompt=prompt)"
]
},
{
"cell_type": "markdown",
"id": "3872432d",
"metadata": {},
"source": [
"Here's an example of asking a question with no chat history"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7fe3e730",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "bfff9cc8",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"\"The President nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court. He described her as one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and a consensus builder. He also mentioned that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.\""
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result[\"answer\"]"
]
},
{
"cell_type": "markdown",
"id": "9e46edf7",
"metadata": {},
"source": [
"Here's an example of asking a question with some chat history"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "00b4cf00",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who came before her\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "f01828d1",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"'The context does not provide information about the predecessor of Ketanji Brown Jackson.'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"result['answer']"
]
},
{
"cell_type": "markdown",
"id": "2324cdc6-98bf-4708-b8cd-02a98b1e5b67",
"metadata": {},
"source": [
"## Chat Vector DB with streaming to `stdout`\n",
"\n",
"Output from the chain will be streamed to `stdout` token by token in this example."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "2efacec3-2690-4b05-8de3-a32fd2ac3911",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chains.llm import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"\n",
"# Construct a ChatVectorDBChain with a streaming llm for combine docs\n",
"# and a separate, non-streaming llm for question generation\n",
"llm = OpenAI(temperature=0)\n",
"streaming_llm = ChatOpenAI(streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]), verbose=True, temperature=0)\n",
"\n",
"question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)\n",
"doc_chain = load_qa_chain(streaming_llm, chain_type=\"stuff\", prompt=prompt)\n",
"\n",
"qa = ChatVectorDBChain(vectorstore=vectorstore, combine_docs_chain=doc_chain, question_generator=question_generator)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "fd6d43f4-7428-44a4-81bc-26fe88a98762",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The President nominated Circuit Court of Appeals Judge Ketanji Brown Jackson to serve on the United States Supreme Court. He described her as one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and a consensus builder. He also mentioned that she has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans."
]
}
],
"source": [
"chat_history = []\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "5ab38978-f3e8-4fa7-808c-c79dec48379a",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The context does not provide information on who Ketanji Brown Jackson succeeded on the United States Supreme Court."
]
}
],
"source": [
"chat_history = [(query, result[\"answer\"])]\n",
"query = \"Did he mention who she suceeded\"\n",
"result = qa({\"question\": query, \"chat_history\": chat_history})\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e8d0055",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -5,7 +5,7 @@
"id": "bb0735c0",
"metadata": {},
"source": [
"# How to use few shot examples\n",
"# Few Shot Examples\n",
"\n",
"This notebook covers how to use few shot examples in chat models.\n",
"\n",

View File

@@ -0,0 +1,192 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9a9350a6",
"metadata": {},
"source": [
"# Memory\n",
"This notebook goes over how to use Memory with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "110935ae",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import (\n",
" ChatPromptTemplate, \n",
" MessagesPlaceholder, \n",
" SystemMessagePromptTemplate, \n",
" HumanMessagePromptTemplate\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "161b6629",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages([\n",
" SystemMessagePromptTemplate.from_template(\"The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\"),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" HumanMessagePromptTemplate.from_template(\"{input}\")\n",
"])"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4976fbda",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import ConversationChain\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.memory import ConversationBufferMemory"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "12a0bea6",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "f6edcd6a",
"metadata": {},
"source": [
"We can now initialize the memory. Note that we set `return_messages=True` To denote that this should return a list of messages when appropriate"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "f55bea38",
"metadata": {},
"outputs": [],
"source": [
"memory = ConversationBufferMemory(return_messages=True)"
]
},
{
"cell_type": "markdown",
"id": "737e8c78",
"metadata": {},
"source": [
"We can now use this in the rest of the chain."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "80152db7",
"metadata": {},
"outputs": [],
"source": [
"conversation = ConversationChain(memory=memory, prompt=prompt, llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ac68e766",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Hello! How can I assist you today?'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"Hi there!\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "babb33d0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"That sounds like fun! I'm happy to chat with you. Is there anything specific you'd like to talk about?\""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"I'm doing well! Just having a conversation with an AI.\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "36f8a1dc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Sure! I am an AI language model created by OpenAI. I was trained on a large dataset of text from the internet, which allows me to understand and generate human-like language. I can answer questions, provide information, and even have conversations like this one. Is there anything else you'd like to know about me?\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conversation.predict(input=\"Tell me about yourself.\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "79fb460b",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -62,7 +62,7 @@
"metadata": {},
"source": [
"## Set the Environment API Key\n",
"You can create a PromptLayer API Key at [www.promptlayer.com](https://www.promptlayer.com) by clicking the settings cog in the navbar.\n",
"You can create a PromptLayer API Key at [wwww.promptlayer.com](https://ww.promptlayer.com) by clicking the settings cog in the navbar.\n",
"\n",
"Set it as an environment variable called `PROMPTLAYER_API_KEY`."
]
@@ -123,40 +123,6 @@
"id": "05e9e2fe",
"metadata": {},
"source": []
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c43803d1",
"metadata": {},
"source": [
"## Using PromptLayer Track\n",
"If you would like to use any of the [PromptLayer tracking features](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9), you need to pass the argument `return_pl_id` when instantializing the PromptLayer LLM to get the request id. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b7d4db01",
"metadata": {},
"outputs": [],
"source": [
"chat = PromptLayerChatOpenAI(return_pl_id=True)\n",
"chat_results = chat.generate([[HumanMessage(content=\"I am a cat and I want\")]])\n",
"\n",
"for res in chat_results.generations:\n",
" pl_request_id = res[0].generation_info[\"pl_request_id\"]\n",
" promptlayer.track.score(request_id=pl_request_id, score=100)"
]
},
{
"cell_type": "markdown",
"id": "13e56507",
"metadata": {},
"source": [
"Using this allows you to track the performance of your model in the PromptLayer dashboard. If you are using a prompt template, you can attach a template to a request as well.\n",
"Overall, this gives you the opportunity to track the performance of different templates and models in the PromptLayer dashboard."
]
}
],
"metadata": {
@@ -175,11 +141,11 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8 (default, Apr 13 2021, 12:59:45) \n[Clang 10.0.0 ]"
"version": "3.8.8"
},
"vscode": {
"interpreter": {
"hash": "8a5edab282632443219e051e4ade2d1d5bbc671c781051bf1437897cbdfea0f1"
"hash": "c4fe2cd85a8d9e8baaec5340ce66faff1c77581a9f43e6c45e85e09b6fced008"
}
}
},

View File

@@ -5,7 +5,7 @@
"id": "fe4e96b5",
"metadata": {},
"source": [
"# How to stream responses\n",
"# Streaming\n",
"\n",
"This notebook goes over how to use streaming with a chat model."
]

View File

@@ -0,0 +1,169 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "07c1e3b9",
"metadata": {},
"source": [
"# Vector DB Question/Answering\n",
"\n",
"This example showcases using a chat model to do question answering over a vector database.\n",
"\n",
"This notebook is very similar to the example of using an LLM in the ChatVectorDBChain. The only differences here are (1) using a ChatModel, and (2) passing in a ChatPromptTemplate (optimized for chat models)."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "82525493",
"metadata": {},
"outputs": [],
"source": [
"from langchain.embeddings.openai import OpenAIEmbeddings\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.chains import VectorDBQA"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5c7049db",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running Chroma using direct local API.\n",
"Using DuckDB in-memory for database. Data will be transient.\n"
]
}
],
"source": [
"from langchain.document_loaders import TextLoader\n",
"loader = TextLoader('../../state_of_the_union.txt')\n",
"documents = loader.load()\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(documents)\n",
"\n",
"embeddings = OpenAIEmbeddings()\n",
"docsearch = Chroma.from_documents(texts, embeddings)"
]
},
{
"cell_type": "markdown",
"id": "35f99145",
"metadata": {},
"source": [
"We can now set up the chat model and chat model specific prompt"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "32a49412",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
" AIMessagePromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f231fb9b",
"metadata": {},
"outputs": [],
"source": [
"system_template=\"\"\"Use the following pieces of context to answer the users question. \n",
"If you don't know the answer, just say that you don't know, don't try to make up an answer.\n",
"----------------\n",
"{context}\"\"\"\n",
"messages = [\n",
" SystemMessagePromptTemplate.from_template(system_template),\n",
" HumanMessagePromptTemplate.from_template(\"{question}\")\n",
"]\n",
"prompt = ChatPromptTemplate.from_messages(messages)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "3018f865",
"metadata": {},
"outputs": [],
"source": [
"chain_type_kwargs = {\"prompt\": prompt}\n",
"qa = VectorDBQA.from_chain_type(llm=ChatOpenAI(), chain_type=\"stuff\", vectorstore=docsearch, chain_type_kwargs=chain_type_kwargs)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "032a47f8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"The President nominated Ketanji Brown Jackson as a Judge for the United States Supreme Court. He described her as one of the nation's top legal minds and a former top litigator in private practice, a former federal public defender, and a consensus builder.\""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"qa.run(query)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8b403637",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
},
"vscode": {
"interpreter": {
"hash": "b1677b440931f40d89ef8be7bf03acb108ce003de0ac9b18e8d43753ea2e7103"
}
}
},
"nbformat": 4,
"nbformat_minor": 5
}

Some files were not shown because too many files have changed in this diff.