Compare commits


1 Commit

Author SHA1 Message Date
Harrison Chase
483c4f7fcb hide param list 2023-06-19 23:28:32 -07:00
507 changed files with 5587 additions and 35232 deletions

View File

@@ -1,26 +1,56 @@
<!-- Thank you for contributing to LangChain!
<!--
Thank you for contributing to LangChain! Your PR will appear in our release under the title you set. Please make sure it highlights your valuable contribution.
Replace this comment with:
- Description: a description of the change,
- Issue: the issue # it fixes (if applicable),
- Dependencies: any dependencies required for this change,
- Tag maintainer: for a quicker response, tag the relevant maintainer (see below),
- Twitter handle: we announce bigger features on Twitter. If your PR gets announced and you'd like a mention, we'll gladly shout you out!
Replace this with a description of the change, the issue it fixes (if applicable), and relevant context. List any dependencies required for this change.
If you're adding a new integration, please include:
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use.
After you're done, someone will review your PR. They may suggest improvements. If no one reviews your PR within a few days, feel free to @-mention the same people again, as notifications can get lost.
Maintainer responsibilities:
- General / Misc / if you don't know who to tag: @dev2049
- DataLoaders / VectorStores / Retrievers: @rlancemartin, @eyurtsev
- Models / Prompts: @hwchase17, @dev2049
- Memory: @hwchase17
- Agents / Tools / Toolkits: @vowelparrot
- Tracing / Callbacks: @agola11
- Async: @agola11
Finally, we'd love to show appreciation for your contribution - if you'd like us to shout you out on Twitter, please also include your handle!
-->
If no one reviews your PR within a few days, feel free to @-mention the same people again.
<!-- Remove if not applicable -->
Fixes # (issue)
#### Before submitting
<!-- If you're adding a new integration, please include:
1. a test for the integration - favor unit tests that do not rely on network access.
2. an example notebook showing its use
See contribution guidelines for more information on how to write tests, lint
etc:
https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
-->
#### Who can review?
Tag maintainers/contributors who might be interested:
<!-- For a quicker response, figure out the right person to tag with @
@hwchase17 - project lead
Tracing / Callbacks
- @agola11
Async
- @agola11
DataLoaders
- @eyurtsev
Models
- @hwchase17
- @agola11
Agents / Tools / Toolkits
- @hwchase17
VectorStores / Retrievers / Memory
- @dev2049
See contribution guidelines for more information on how to write/run tests, lint, etc: https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md
-->

View File

@@ -9,9 +9,6 @@ build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
  jobs:
    pre_build:
      - python docs/api_reference/create_api_rst.py

# Build documentation in the docs/ directory with Sphinx
sphinx:

View File

@@ -0,0 +1,57 @@
document.addEventListener('DOMContentLoaded', () => {
  // Load the external dependencies
  function loadScript(src, onLoadCallback) {
    const script = document.createElement('script');
    script.src = src;
    script.onload = onLoadCallback;
    document.head.appendChild(script);
  }

  function createRootElement() {
    const rootElement = document.createElement('div');
    rootElement.id = 'my-component-root';
    document.body.appendChild(rootElement);
    return rootElement;
  }

  function initializeMendable() {
    const rootElement = createRootElement();
    const { MendableFloatingButton } = Mendable;

    const iconSpan1 = React.createElement('span', {
    }, '🦜');

    const iconSpan2 = React.createElement('span', {
    }, '🔗');

    const icon = React.createElement('p', {
      style: { color: '#ffffff', fontSize: '22px', width: '48px', height: '48px', margin: '0px', padding: '0px', display: 'flex', alignItems: 'center', justifyContent: 'center', textAlign: 'center' },
    }, [iconSpan1, iconSpan2]);

    const mendableFloatingButton = React.createElement(
      MendableFloatingButton,
      {
        style: { darkMode: false, accentColor: '#010810' },
        floatingButtonStyle: { color: '#ffffff', backgroundColor: '#010810' },
        anon_key: '82842b36-3ea6-49b2-9fb8-52cfc4bde6bf', // Mendable Search Public ANON key, ok to be public
        cmdShortcutKey: 'j',
        messageSettings: {
          openSourcesInNewTab: false,
          prettySources: true // Prettify the sources displayed now
        },
        icon: icon,
      }
    );

    ReactDOM.render(mendableFloatingButton, rootElement);
  }

  loadScript('https://unpkg.com/react@17/umd/react.production.min.js', () => {
    loadScript('https://unpkg.com/react-dom@17/umd/react-dom.production.min.js', () => {
      loadScript('https://unpkg.com/@mendable/search@0.0.102/dist/umd/mendable.min.js', initializeMendable);
    });
  });
});

View File

@@ -0,0 +1,12 @@
Agents
==============

Reference guide for Agents and associated abstractions.

.. toctree::
    :maxdepth: 1
    :glob:

    modules/agents
    modules/tools
    modules/agent_toolkits

File diff suppressed because it is too large

View File

@@ -11,13 +11,12 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
import toml
sys.path.insert(0, os.path.abspath("."))
with open("../../pyproject.toml") as f:
data = toml.load(f)
@@ -46,9 +45,11 @@ extensions = [
"sphinx.ext.napoleon",
"sphinx.ext.viewcode",
"sphinxcontrib.autodoc_pydantic",
"myst_nb",
"sphinx_copybutton",
"sphinx_panels",
"IPython.sphinxext.ipython_console_highlighting",
"sphinx_tabs.tabs",
]
source_suffix = [".rst"]
@@ -58,22 +59,24 @@ autodoc_pydantic_config_members = False
autodoc_pydantic_model_show_config_summary = False
autodoc_pydantic_model_show_validator_members = False
autodoc_pydantic_model_show_validator_summary = False
autodoc_pydantic_model_show_field_summary = False
autodoc_pydantic_model_members = False
autodoc_pydantic_model_undoc_members = False
autodoc_pydantic_model_hide_paramlist = True
autodoc_pydantic_model_signature_prefix = "class"
autodoc_pydantic_field_signature_prefix = "param"
autodoc_member_order = "groupwise"
autoclass_content = "both"
autodoc_typehints_format = "short"
autodoc_pydantic_field_signature_prefix = "attribute"
autodoc_pydantic_model_summary_list_order = "bysource"
autodoc_member_order = "bysource"
autodoc_default_options = {
"members": True,
"show-inheritance": True,
"inherited-members": "BaseModel",
"undoc-members": True,
"special-members": "__call__",
"undoc_members": True,
"inherited_members": "BaseModel",
}
# autodoc_typehints = "description"
autodoc_typehints = "description"
# Add any paths that contain templates here, relative to this directory.
templates_path = ["templates"]
templates_path = ["_templates"]
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
@@ -86,16 +89,14 @@ exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "scikit-learn-modern"
html_theme_path = ["themes"]
html_theme = "sphinx_rtd_theme"
# redirects dictionary maps from old links to new links
html_additional_pages = {}
redirects = {
"index": "api_reference",
html_theme_options = {
"path_to_docs": "docs",
"repository_url": "https://github.com/hwchase17/langchain",
"use_repository_button": True,
# "style_nav_header_background": "white"
}
for old_link in redirects:
html_additional_pages[old_link] = "redirects.html"
html_context = {
"display_github": True, # Integrate GitHub
@@ -103,7 +104,6 @@ html_context = {
"github_repo": "langchain", # Repo name
"github_version": "master", # Version
"conf_py_path": "/docs/api_reference", # Path in the checkout to the docs root
"redirects": redirects,
}
# Add any paths that contain custom static files (such as style sheets) here,
@@ -116,9 +116,10 @@ html_static_path = ["_static"]
html_css_files = [
"css/custom.css",
]
html_use_index = False
html_js_files = [
"js/mendablesearch.js",
]
nb_execution_mode = "off"
myst_enable_extensions = ["colon_fence"]
# generate autosummary even if no references
autosummary_generate = True

View File

@@ -1,94 +0,0 @@
"""Script for auto-generating api_reference.rst"""
import glob
import re
from pathlib import Path
ROOT_DIR = Path(__file__).parents[2].absolute()
PKG_DIR = ROOT_DIR / "langchain"
WRITE_FILE = Path(__file__).parent / "api_reference.rst"
def load_members() -> dict:
members: dict = {}
for py in glob.glob(str(PKG_DIR) + "/**/*.py", recursive=True):
module = py[len(str(PKG_DIR)) + 1 :].replace(".py", "").replace("/", ".")
top_level = module.split(".")[0]
if top_level not in members:
members[top_level] = {"classes": [], "functions": []}
with open(py, "r") as f:
for line in f.readlines():
cls = re.findall(r"^class ([^_].*)\(", line)
members[top_level]["classes"].extend([module + "." + c for c in cls])
func = re.findall(r"^def ([^_].*)\(", line)
members[top_level]["functions"].extend([module + "." + f for f in func])
return members
def construct_doc(members: dict) -> str:
full_doc = """\
.. _api_reference:
=============
API Reference
=============
"""
for module, _members in sorted(members.items(), key=lambda kv: kv[0]):
classes = _members["classes"]
functions = _members["functions"]
if not (classes or functions):
continue
module_title = module.replace("_", " ").title()
if module_title == "Llms":
module_title = "LLMs"
section = f":mod:`langchain.{module}`: {module_title}"
full_doc += f"""\
{section}
{'=' * (len(section) + 1)}
.. automodule:: langchain.{module}
:no-members:
:no-inherited-members:
"""
if classes:
cstring = "\n ".join(sorted(classes))
full_doc += f"""\
Classes
--------------
.. currentmodule:: langchain
.. autosummary::
:toctree: {module}
:template: class.rst
{cstring}
"""
if functions:
fstring = "\n ".join(sorted(functions))
full_doc += f"""\
Functions
--------------
.. currentmodule:: langchain
.. autosummary::
:toctree: {module}
{fstring}
"""
return full_doc
def main() -> None:
members = load_members()
full_doc = construct_doc(members)
with open(WRITE_FILE, "w") as f:
f.write(full_doc)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,13 @@
Data connection
==============

LangChain has a number of modules that help you load, structure, store, and retrieve documents.

.. toctree::
    :maxdepth: 1
    :glob:

    modules/document_loaders
    modules/document_transformers
    modules/embeddings
    modules/vectorstores
    modules/retrievers

View File

@@ -1,8 +1,29 @@
=============
LangChain API
=============
API Reference
==========================

| Full documentation on all methods, classes, and APIs in the LangChain Python package.

.. toctree::
    :maxdepth: 2
    :maxdepth: 1
    :caption: Abstractions

    api_reference.rst
    ./modules/base_classes.rst

.. toctree::
    :maxdepth: 1
    :caption: Core

    ./model_io.rst
    ./data_connection.rst
    ./modules/chains.rst
    ./agents.rst
    ./modules/memory.rst
    ./modules/callbacks.rst

.. toctree::
    :maxdepth: 1
    :caption: Additional

    ./modules/utilities.rst
    ./modules/experimental.rst

View File

@@ -0,0 +1,12 @@
Model I/O
==============

LangChain provides interfaces and integrations for working with language models.

.. toctree::
    :maxdepth: 1
    :glob:

    ./prompts.rst
    ./models.rst
    ./modules/output_parsers.rst

View File

@@ -0,0 +1,11 @@
Models
==============

LangChain provides interfaces and integrations for a number of different types of models.

.. toctree::
    :maxdepth: 1
    :glob:

    modules/llms
    modules/chat_models

View File

@@ -0,0 +1,7 @@
Agent Toolkits
===============================

.. automodule:: langchain.agents.agent_toolkits
    :members:
    :undoc-members:

View File

@@ -0,0 +1,7 @@
Agents
===============================

.. automodule:: langchain.agents
    :members:
    :undoc-members:

View File

@@ -0,0 +1,5 @@
Base classes
========================

.. automodule:: langchain.schema
    :inherited-members:

View File

@@ -0,0 +1,7 @@
Callbacks
=======================

.. automodule:: langchain.callbacks
    :members:
    :undoc-members:

View File

@@ -0,0 +1,8 @@
Chains
=======================

.. automodule:: langchain.chains
    :members:
    :undoc-members:
    :inherited-members: BaseModel

View File

@@ -0,0 +1,7 @@
Chat Models
===============================

.. automodule:: langchain.chat_models
    :members:
    :undoc-members:

View File

@@ -0,0 +1,7 @@
Document Loaders
===============================

.. automodule:: langchain.document_loaders
    :members:
    :undoc-members:

View File

@@ -0,0 +1,13 @@
Document Transformers
===============================

.. automodule:: langchain.document_transformers
    :members:
    :undoc-members:

Text Splitters
------------------------------

.. automodule:: langchain.text_splitter
    :members:
    :undoc-members:

View File

@@ -0,0 +1,5 @@
Embeddings
===========================

.. automodule:: langchain.embeddings
    :members:

View File

@@ -0,0 +1,5 @@
Example Selector
=========================================

.. automodule:: langchain.prompts.example_selector
    :members:

View File

@@ -0,0 +1,28 @@
====================
Experimental
====================

This module contains experimental modules and reproductions of existing work using LangChain primitives.

Autonomous agents
------------------

Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module.

.. autoclass:: langchain.experimental.BabyAGI
    :members:

.. autoclass:: langchain.experimental.AutoGPT
    :members:

Generative agents
------------------

Here, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.

.. autoclass:: langchain.experimental.GenerativeAgent
    :members:

.. autoclass:: langchain.experimental.GenerativeAgentMemory
    :members:

View File

@@ -0,0 +1,7 @@
LLMs
=======================

.. automodule:: langchain.llms
    :members:
    :inherited-members:
    :special-members: __call__

View File

@@ -0,0 +1,7 @@
Memory
===============================

.. automodule:: langchain.memory
    :members:
    :undoc-members:

View File

@@ -0,0 +1,7 @@
Output Parsers
===============================

.. automodule:: langchain.output_parsers
    :members:
    :undoc-members:

View File

@@ -0,0 +1,6 @@
Prompt Templates
========================

.. automodule:: langchain.prompts
    :members:
    :undoc-members:

View File

@@ -0,0 +1,14 @@
Retrievers
===============================

.. automodule:: langchain.retrievers
    :members:
    :undoc-members:

Document compressors
-------------------------------

.. automodule:: langchain.retrievers.document_compressors
    :members:
    :undoc-members:

View File

@@ -0,0 +1,7 @@
Tools
===============================

.. automodule:: langchain.tools
    :members:
    :undoc-members:

View File

@@ -0,0 +1,7 @@
Utilities
===============================

.. automodule:: langchain.utilities
    :members:
    :undoc-members:

View File

@@ -0,0 +1,6 @@
Vector Stores
=============================

.. automodule:: langchain.vectorstores
    :members:
    :undoc-members:

View File

@@ -0,0 +1,11 @@
Prompts
==============

The reference guides here all relate to objects for working with Prompts.

.. toctree::
    :maxdepth: 1
    :glob:

    modules/prompts
    modules/example_selector

View File

@@ -1,27 +0,0 @@
Copyright (c) 2007-2023 The scikit-learn developers.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,28 +0,0 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. currentmodule:: {{ module }}
.. autoclass:: {{ objname }}
{% block methods %}
{% if methods %}
.. rubric:: {{ _('Methods') }}
.. autosummary::
{% for item in methods %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}
{% block attributes %}
{% if attributes %}
.. rubric:: {{ _('Attributes') }}
.. autosummary::
{% for item in attributes %}
~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}

View File

@@ -1,15 +0,0 @@
{% set redirect = pathto(redirects[pagename]) %}
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="Refresh" content="0; url={{ redirect }}" />
<meta name="Description" content="scikit-learn: machine learning in Python">
<link rel="canonical" href="{{ redirect }}" />
<title>scikit-learn: machine learning in Python</title>
</head>
<body>
<p>You will be automatically redirected to the <a href="{{ redirect }}">new location of this page</a>.</p>
</body>
</html>

View File

@@ -1,27 +0,0 @@
Copyright (c) 2007-2023 The scikit-learn developers.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,67 +0,0 @@
<script>
$(document).ready(function() {
/* Add a [>>>] button on the top-right corner of code samples to hide
* the >>> and ... prompts and the output and thus make the code
* copyable. */
var div = $('.highlight-python .highlight,' +
'.highlight-python3 .highlight,' +
'.highlight-pycon .highlight,' +
'.highlight-default .highlight')
var pre = div.find('pre');
// get the styles from the current theme
pre.parent().parent().css('position', 'relative');
var hide_text = 'Hide prompts and outputs';
var show_text = 'Show prompts and outputs';
// create and add the button to all the code blocks that contain >>>
div.each(function(index) {
var jthis = $(this);
if (jthis.find('.gp').length > 0) {
var button = $('<span class="copybutton">&gt;&gt;&gt;</span>');
button.attr('title', hide_text);
button.data('hidden', 'false');
jthis.prepend(button);
}
// tracebacks (.gt) contain bare text elements that need to be
// wrapped in a span to work with .nextUntil() (see later)
jthis.find('pre:has(.gt)').contents().filter(function() {
return ((this.nodeType == 3) && (this.data.trim().length > 0));
}).wrap('<span>');
});
// define the behavior of the button when it's clicked
$('.copybutton').click(function(e){
e.preventDefault();
var button = $(this);
if (button.data('hidden') === 'false') {
// hide the code output
button.parent().find('.go, .gp, .gt').hide();
button.next('pre').find('.gt').nextUntil('.gp, .go').css('visibility', 'hidden');
button.css('text-decoration', 'line-through');
button.attr('title', show_text);
button.data('hidden', 'true');
} else {
// show the code output
button.parent().find('.go, .gp, .gt').show();
button.next('pre').find('.gt').nextUntil('.gp, .go').css('visibility', 'visible');
button.css('text-decoration', 'none');
button.attr('title', hide_text);
button.data('hidden', 'false');
}
});
/*** Add permalink buttons next to glossary terms ***/
$('dl.glossary > dt[id]').append(function() {
return ('<a class="headerlink" href="#' +
this.getAttribute('id') +
'" title="Permalink to this term">¶</a>');
});
});
</script>
{%- if pagename != 'index' and pagename != 'documentation' %}
{% if theme_mathjax_path %}
<script id="MathJax-script" async src="{{ theme_mathjax_path }}"></script>
{% endif %}
{%- endif %}

View File

@@ -1,142 +0,0 @@
{# TEMPLATE VAR SETTINGS #}
{%- set url_root = pathto('', 1) %}
{%- if url_root == '#' %}{% set url_root = '' %}{% endif %}
{%- if not embedded and docstitle %}
{%- set titlesuffix = " &mdash; "|safe + docstitle|e %}
{%- else %}
{%- set titlesuffix = "" %}
{%- endif %}
{%- set lang_attr = 'en' %}
<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="{{ lang_attr }}" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="{{ lang_attr }}" > <!--<![endif]-->
<head>
<meta charset="utf-8">
{{ metatags }}
<meta name="viewport" content="width=device-width, initial-scale=1.0">
{% block htmltitle %}
<title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{% endblock %}
<link rel="canonical" href="http://scikit-learn.org/stable/{{pagename}}.html" />
{% if favicon_url %}
<link rel="shortcut icon" href="{{ favicon_url|e }}"/>
{% endif %}
<link rel="stylesheet" href="{{ pathto('_static/css/vendor/bootstrap.min.css', 1) }}" type="text/css" />
{%- for css in css_files %}
{%- if css|attr("rel") %}
<link rel="{{ css.rel }}" href="{{ pathto(css.filename, 1) }}" type="text/css"{% if css.title is not none %} title="{{ css.title }}"{% endif %} />
{%- else %}
<link rel="stylesheet" href="{{ pathto(css, 1) }}" type="text/css" />
{%- endif %}
{%- endfor %}
<link rel="stylesheet" href="{{ pathto('_static/' + style, 1) }}" type="text/css" />
<script id="documentation_options" data-url_root="{{ pathto('', 1) }}" src="{{ pathto('_static/documentation_options.js', 1) }}"></script>
<script src="{{ pathto('_static/jquery.js', 1) }}"></script>
{%- block extrahead %} {% endblock %}
</head>
<body>
{% include "nav.html" %}
{%- block content %}
<div class="d-flex" id="sk-doc-wrapper">
<input type="checkbox" name="sk-toggle-checkbox" id="sk-toggle-checkbox">
<label id="sk-sidemenu-toggle" class="sk-btn-toggle-toc btn sk-btn-primary" for="sk-toggle-checkbox">Toggle Menu</label>
<div id="sk-sidebar-wrapper" class="border-right">
<div class="sk-sidebar-toc-wrapper">
<div class="btn-group w-100 mb-2" role="group" aria-label="rellinks">
{%- if prev %}
<a href="{{ prev.link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ prev.title|striptags }}">Prev</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink py-1 disabled"">Prev</a>
{%- endif %}
{%- if parents -%}
<a href="{{ parents[-1].link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ parents[-1].title|striptags }}">Up</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink disabled py-1">Up</a>
{%- endif %}
{%- if next %}
<a href="{{ next.link|e }}" role="button" class="btn sk-btn-rellink py-1" sk-rellink-tooltip="{{ next.title|striptags }}">Next</a>
{%- else %}
<a href="#" role="button" class="btn sk-btn-rellink py-1 disabled"">Next</a>
{%- endif %}
</div>
{%- if pagename != "install" %}
<div class="alert alert-warning p-1 mb-2" role="alert">
<p class="text-center mb-0">
<strong>LangChain {{ release }}</strong><br/>
</p>
</div>
{%- endif %}
{%- if meta and meta['parenttoc']|tobool %}
<div class="sk-sidebar-toc">
{% set nav = get_nav_object(maxdepth=3, collapse=True, numbered=True) %}
<ul>
{% for main_nav_item in nav %}
{% if main_nav_item.active %}
<li>
<a href="{{ main_nav_item.url }}" class="sk-toc-active">{{ main_nav_item.title }}</a>
</li>
<ul>
{% for nav_item in main_nav_item.children %}
<li>
<a href="{{ nav_item.url }}" class="{% if nav_item.active %}sk-toc-active{% endif %}">{{ nav_item.title }}</a>
{% if nav_item.children %}
<ul>
{% for inner_child in nav_item.children %}
<li class="sk-toctree-l3">
<a href="{{ inner_child.url }}">{{ inner_child.title }}</a>
</li>
{% endfor %}
</ul>
{% endif %}
</li>
{% endfor %}
</ul>
{% endif %}
{% endfor %}
</ul>
</div>
{%- elif meta and meta['globalsidebartoc']|tobool %}
<div class="sk-sidebar-toc sk-sidebar-global-toc">
{{ toctree(maxdepth=2, titles_only=True) }}
</div>
{%- else %}
<div class="sk-sidebar-toc">
{{ toc }}
</div>
{%- endif %}
</div>
</div>
<div id="sk-page-content-wrapper">
<div class="sk-page-content container-fluid body px-md-3" role="main">
{% block body %}{% endblock %}
</div>
<div class="container">
<footer class="sk-content-footer">
{%- if pagename != 'index' %}
{%- if show_copyright %}
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}&copy; {{ copyright }}.{% endtrans %}
{%- else %}
{% trans copyright=copyright|e %}&copy; {{ copyright }}.{% endtrans %}
{%- endif %}
{%- endif %}
{%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}
{%- endif %}
{%- if show_source and has_source and sourcename %}
<a href="{{ pathto('_sources/' + sourcename, true)|e }}" rel="nofollow">{{ _('Show this page source') }}</a>
{%- endif %}
{%- endif %}
</footer>
</div>
</div>
</div>
{%- endblock %}
<script src="{{ pathto('_static/js/vendor/bootstrap.min.js', 1) }}"></script>
{% include "javascript.html" %}
</body>
</html>

View File

@@ -1,85 +0,0 @@
{%- if pagename != 'index' and pagename != 'documentation' %}
{%- set nav_bar_class = "sk-docs-navbar" %}
{%- set top_container_cls = "sk-docs-container" %}
{%- else %}
{%- set nav_bar_class = "sk-landing-navbar" %}
{%- set top_container_cls = "sk-landing-container" %}
{%- endif %}
{% if theme_link_to_live_contributing_page|tobool %}
{# Link to development page for live builds #}
{%- set development_link = "https://scikit-learn.org/dev/developers/index.html" %}
{# Open on a new development page in new window/tab for live builds #}
{%- set development_attrs = 'target="_blank" rel="noopener noreferrer"' %}
{%- else %}
{%- set development_link = pathto('developers/index') %}
{%- set development_attrs = '' %}
{%- endif %}
{# title, link, link_attrs #}
{%- set drop_down_navigation = [
('Getting Started', pathto('getting_started'), ''),
('Tutorial', pathto('tutorial/index'), ''),
("What's new", pathto('whats_new/v' + version), ''),
('Glossary', pathto('glossary'), ''),
('Development', development_link, development_attrs),
('FAQ', pathto('faq'), ''),
('Support', pathto('support'), ''),
('Related packages', pathto('related_projects'), ''),
('Roadmap', pathto('roadmap'), ''),
('Governance', pathto('governance'), ''),
('About us', pathto('about'), ''),
('GitHub', 'https://github.com/scikit-learn/scikit-learn', ''),
('Other Versions and Download', 'https://scikit-learn.org/dev/versions.html', '')]
-%}
<nav id="navbar" class="{{ nav_bar_class }} navbar navbar-expand-md navbar-light bg-light py-0">
<div class="container-fluid {{ top_container_cls }} px-0">
{%- if logo_url %}
<a class="navbar-brand py-0" href="{{ pathto('index') }}">
<img
class="sk-brand-img"
src="{{ logo_url|e }}"
alt="logo"/>
</a>
{%- endif %}
<button
id="sk-navbar-toggler"
class="navbar-toggler"
type="button"
data-toggle="collapse"
data-target="#navbarSupportedContent"
aria-controls="navbarSupportedContent"
aria-expanded="false"
aria-label="Toggle navigation"
>
<span class="navbar-toggler-icon"></span>
</button>
<div class="sk-navbar-collapse collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav mr-auto">
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('api_reference') }}">API</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" target="_blank" rel="noopener noreferrer" href="https://python.langchain.com/">Python Docs</a>
</li>
{%- for title, link, link_attrs in drop_down_navigation %}
<li class="nav-item">
<a class="sk-nav-link nav-link nav-more-item-mobile-items" href="{{ link }}" {{ link_attrs }}>{{ title }}</a>
</li>
{%- endfor %}
</ul>
{%- if pagename != "search"%}
<div id="searchbox" role="search">
<div class="searchformwrapper">
<form class="search" action="{{ pathto('search') }}" method="get">
<input class="sk-search-text-input" type="text" name="q" aria-labelledby="searchlabel" />
<input class="sk-search-text-btn" type="submit" value="{{ _('Go') }}" />
</form>
</div>
</div>
{%- endif %}
</div>
</div>
</nav>

View File

@@ -1,16 +0,0 @@
{%- extends "basic/search.html" %}
{% block extrahead %}
<script type="text/javascript" src="{{ pathto('_static/underscore.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('searchindex.js', 1) }}" defer></script>
<script type="text/javascript" src="{{ pathto('_static/doctools.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('_static/language_data.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('_static/searchtools.js', 1) }}"></script>
<!-- <script type="text/javascript" src="{{ pathto('_static/sphinx_highlight.js', 1) }}"></script> -->
<script type="text/javascript">
$(document).ready(function() {
if (!Search.out) {
Search.init();
}
});
</script>
{% endblock %}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1,8 +0,0 @@
[theme]
inherit = basic
pygments_style = default
stylesheet = css/theme.css
[options]
link_to_live_contributing_page = false
mathjax_path =

View File

@@ -54,7 +54,7 @@ Learn best practices for developing with LangChain.
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/ecosystem/integrations/) and [dependent repos](/docs/ecosystem/dependents.html).
### [Additional resources](/docs/additional_resources/)
Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/additional_resources/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).
Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/ecosystem/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).
<h3><span style={{color:"#2e8555"}}> Support </span></h3>

View File

@@ -1,216 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# PromptLayer\n",
"\n",
"<img src=\"https://promptlayer.com/logo.png\" height=\"300\">\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"[PromptLayer](https://promptlayer.com) is a an observability platform for prompts and LLMs. In this guide we will go over how to setup the `PromptLayerCallbackHandler`. While PromptLayer does have LLMs that integrate directly with LangChain (eg [`PromptLayerOpenAI`](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/promptlayer_openai)), this callback will be an easier and more feature rich way to integrate PromptLayer with any model on LangChain. \n",
"\n",
"This callback is also the recommended way to connect with PromptLayer when building Chains and Agents on LangChain."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"## Installation and Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install promptlayer --upgrade"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Getting API Credentials\n",
"\n",
"If you have not already create an account on [PromptLayer](https://www.promptlayer.com) and get an API key by clicking on the settings cog in the navbar\n",
"Set it as an environment variabled called `PROMPTLAYER_API_KEY`\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Usage\n",
"\n",
"To get started with `PromptLayerCallbackHandler` is fairly simple, it takes two optional arguments:\n",
"1. `pl_tags` - an optional list of strings that will be tags tracked on PromptLayer\n",
"2. `pl_id_callback` - an optional function that will get a `promptlayer_request_id` as an argument. This id can be used with all of PromptLayers tracking features to track, metadata, scores, and prompt usage."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Simple Example\n",
"\n",
"In this simple example we use `PromptLayerCallbackHandler` with `ChatOpenAI`. We add a PromptLayer tag named `chatopenai`"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"Sure, here's one:\\n\\nWhy did the tomato turn red?\\n\\nBecause it saw the salad dressing!\" additional_kwargs={} example=False\n"
]
}
],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.schema import (\n",
" HumanMessage,\n",
")\n",
"from langchain.callbacks import PromptLayerCallbackHandler\n",
"\n",
"chat_llm = ChatOpenAI(\n",
" temperature=0,\n",
" callbacks=[PromptLayerCallbackHandler(pl_tags=[\"chatopenai\"])],\n",
")\n",
"llm_results = chat_llm(\n",
" [\n",
" HumanMessage(content=\"What comes after 1,2,3 ?\"),\n",
" HumanMessage(content=\"Tell me another joke?\"),\n",
" ]\n",
")\n",
"print(llm_results)\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Full Featured Example\n",
"\n",
"In this example we unlock more of the power of PromptLayer.\n",
"\n",
"We are using the Prompt Registry and fetching the prompt called `example`.\n",
"\n",
"We also define a `pl_id_callback` function that tracks a score, metadata and the prompt used. Read more about tracking on [our docs](docs.promptlayer.com)."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"prompt layer id 6050929\n"
]
},
{
"data": {
"text/plain": [
"'\\nToasterCo.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.llms import OpenAI\n",
"from langchain.callbacks import PromptLayerCallbackHandler\n",
"import promptlayer\n",
"\n",
"def pl_id_callback(promptlayer_request_id):\n",
" print(\"prompt layer id \", promptlayer_request_id)\n",
" promptlayer.track.score(\n",
" request_id=promptlayer_request_id, score=100\n",
" ) # score is an integer 0-100\n",
" promptlayer.track.metadata(\n",
" request_id=promptlayer_request_id, metadata={\"foo\": \"bar\"}\n",
" ) # metadata is a dictionary of key value pairs that is tracked on PromptLayer\n",
" promptlayer.track.prompt(\n",
" request_id=promptlayer_request_id,\n",
" prompt_name=\"example\",\n",
" prompt_input_variables={\"product\": \"toasters\"},\n",
" version=1,\n",
" )\n",
"\n",
"\n",
"openai_llm = OpenAI(\n",
" model_name=\"text-davinci-002\",\n",
" callbacks=[PromptLayerCallbackHandler(pl_id_callback=pl_id_callback)],\n",
")\n",
"\n",
"example_prompt = promptlayer.prompts.get(\"example\", version=1, langchain=True)\n",
"openai_llm(example_prompt.format(product=\"toasters\"))"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"That is all it takes! After setup all your requests will show up on the PromptLayer dasahboard.\n",
"This callback also works with any LLM implemented on LangChain."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "base",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
},
"vscode": {
"interpreter": {
"hash": "c4fe2cd85a8d9e8baaec5340ce66faff1c77581a9f43e6c45e85e09b6fced008"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -1,6 +1,6 @@
# Document QA
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](/docs/modules/chains/document/).
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](../document.html).
import Example from "@snippets/modules/chains/additional/question_answering.mdx"
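For orientation, a minimal sketch of what such a document-QA chain looks like (the toy documents and the OpenAI model choice are illustrative; the imported snippet above is the canonical example):

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.schema import Document

# Two toy documents to answer over
docs = [
    Document(page_content="LangChain provides document chains for question answering."),
    Document(page_content="The 'stuff' chain stuffs all documents into a single prompt."),
]

# "stuff" simply places every document into one prompt
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
chain.run(input_documents=docs, question="What does the stuff chain do?")
```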

View File

@@ -5,7 +5,7 @@ sidebar_position: 1
# Data connection
Many LLM applications require user-specific data that is not part of the model's training set. LangChain gives you the
building blocks to load, transform, store and query your data via:
building blocks to load, transform, and query your data via:
- [Document loaders](/docs/modules/data_connection/document_loaders/): Load documents from many different sources
- [Document transformers](/docs/modules/data_connection/document_transformers/): Split documents, drop redundant documents, and more
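As a rough sketch of how these building blocks fit together (the file name and query are illustrative, and an OpenAI key is assumed):

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Load -> transform (split) -> embed & store -> retrieve
raw_docs = TextLoader("state_of_the_union.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(raw_docs)
db = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = db.as_retriever()
retriever.get_relevant_documents("What did the president say about the economy?")
```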

View File

@@ -1,9 +0,0 @@
# Caching
LangChain provides an optional caching layer for Chat Models. This is useful for two reasons:
It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times.
It can speed up your application by reducing the number of API calls you make to the LLM provider.
import CachingChat from "@snippets/modules/model_io/models/chat/how_to/chat_model_caching.mdx"
<CachingChat/>
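As a quick illustration (using the in-memory cache; the imported snippet above covers the full set of backends):

```python
import langchain
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI

# Every chat model call now consults the cache first
langchain.llm_cache = InMemoryCache()

llm = ChatOpenAI()
llm.predict("Tell me a joke")  # first call goes to the provider
llm.predict("Tell me a joke")  # identical call is answered from the cache
```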

View File

@@ -7,10 +7,7 @@ const { ProvidePlugin } = require("webpack");
const path = require("path");
const examplesPath = path.resolve(__dirname, "..", "examples", "src");
const snippetsPath = path.resolve(__dirname, "..", "snippets");
const baseLightCodeBlockTheme = require("prism-react-renderer/themes/vsLight");
const baseDarkCodeBlockTheme = require("prism-react-renderer/themes/vsDark");
const snippetsPath = path.resolve(__dirname, "..", "snippets")
/** @type {import('@docusaurus/types').Config} */
const config = {
@@ -26,8 +23,8 @@ const config = {
// For GitHub pages deployment, it is often '/<projectName>/'
baseUrl: "/",
onBrokenLinks: "warn",
onBrokenMarkdownLinks: "throw",
onBrokenLinks: "ignore",
onBrokenMarkdownLinks: "ignore",
plugins: [
() => ({
@@ -130,20 +127,8 @@ const config = {
},
},
prism: {
theme: {
...baseLightCodeBlockTheme,
plain: {
...baseLightCodeBlockTheme.plain,
backgroundColor: "#F5F5F5",
},
},
darkTheme: {
...baseDarkCodeBlockTheme,
plain: {
...baseDarkCodeBlockTheme.plain,
backgroundColor: "#222222",
},
},
theme: require("prism-react-renderer/themes/vsLight"),
darkTheme: require("prism-react-renderer/themes/vsDark"),
},
image: "img/parrot-chainlink-icon.png",
navbar: {

File diff suppressed because it is too large

View File

@@ -1,28 +0,0 @@
# Alibaba Cloud Opensearch
[Alibaba Cloud OpenSearch](https://www.alibabacloud.com/product/opensearch) is a one-stop platform to develop intelligent search services. OpenSearch was built based on the large-scale distributed search engine developed by Alibaba. OpenSearch serves more than 500 business cases in Alibaba Group and thousands of Alibaba Cloud customers. OpenSearch helps develop search services in different search scenarios, including e-commerce, O2O, multimedia, the content industry, communities and forums, and big data query in enterprises.
OpenSearch helps you develop high quality, maintenance-free, and high performance intelligent search services to provide your users with high search efficiency and accuracy.
OpenSearch provides the vector search feature. In specific scenarios, especially test question search and image search scenarios, you can use the vector search feature together with the multimodal search feature to improve the accuracy of search results. This topic describes the syntax and usage notes of vector indexes.
## Purchase an instance and configure it
- Purchase OpenSearch Vector Search Edition from [Alibaba Cloud](https://opensearch.console.aliyun.com) and configure the instance according to the help [documentation](https://help.aliyun.com/document_detail/463198.html?spm=a2c4g.465092.0.0.2cd15002hdwavO).
## Alibaba Cloud Opensearch Vector Store Wrappers
supported functions:
- `add_texts`
- `add_documents`
- `from_texts`
- `from_documents`
- `similarity_search`
- `asimilarity_search`
- `similarity_search_by_vector`
- `asimilarity_search_by_vector`
- `similarity_search_with_relevance_scores`
For a more detailed walk through of the Alibaba Cloud OpenSearch wrapper, see [this notebook](../modules/indexes/vectorstores/examples/alibabacloud_opensearch.ipynb)
If you encounter any problems during use, please feel free to contact [xingshaomin.xsm@alibaba-inc.com](mailto:xingshaomin.xsm@alibaba-inc.com), and we will do our best to provide you with assistance and support.
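As an illustrative sketch only (it assumes a `store` that has already been constructed against your instance, as shown in the notebook linked above), the listed functions are used like any other LangChain vector store:

```python
from langchain.vectorstores import AlibabaCloudOpenSearch

def demo(store: AlibabaCloudOpenSearch) -> None:
    # `store` is assumed to be configured against your Vector Search Edition
    # instance; see the linked notebook for the connection settings.
    store.add_texts(["OpenSearch supports vector search"])
    docs = store.similarity_search("vector search", k=1)
    print(docs)
```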

View File

@@ -1,73 +0,0 @@
# Amazon API Gateway
[Amazon API Gateway](https://aws.amazon.com/api-gateway/) is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.
API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.
## LLM
See a [usage example](/docs/modules/model_io/models/llms/integrations/amazon_api_gateway_example.html).
```python
from langchain.llms import AmazonAPIGateway
api_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"
llm = AmazonAPIGateway(api_url=api_url)
# These are sample parameters for Falcon 40B Instruct Deployed from Amazon SageMaker JumpStart
parameters = {
"max_new_tokens": 100,
"num_return_sequences": 1,
"top_k": 50,
"top_p": 0.95,
"do_sample": False,
"return_full_text": True,
"temperature": 0.2,
}
prompt = "what day comes after Friday?"
llm.model_kwargs = parameters
llm(prompt)
>>> 'what day comes after Friday?\nSaturday'
```
## Agent
```python
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.llms import AmazonAPIGateway
api_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"
llm = AmazonAPIGateway(api_url=api_url)
parameters = {
"max_new_tokens": 50,
"num_return_sequences": 1,
"top_k": 250,
"top_p": 0.25,
"do_sample": False,
"temperature": 0.1,
}
llm.model_kwargs = parameters
# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.
tools = load_tools(["python_repl", "llm-math"], llm=llm)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
# Now let's test it out!
agent.run("""
Write a Python script that prints "Hello, world!"
""")
>>> 'Hello, world!'
```

View File

@@ -1,464 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "944e4194",
"metadata": {},
"source": [
"# Arthur LangChain integration"
]
},
{
"cell_type": "markdown",
"id": "b1ccdfe8",
"metadata": {},
"source": [
"[Arthur](https://www.arthur.ai/) is a model monitoring and observability platform.\n",
"\n",
"This notebook shows how to register LLMs (chat and non-chat) as models with the Arthur platform. Then we show how to set up langchain LLMs with an Arthur callback that will automatically log model inferences to Arthur.\n",
"\n",
"For more information about how to use the Arthur SDK, visit our [docs](http://docs.arthur.ai), in particular our [model onboarding guide](https://docs.arthur.ai/user-guide/walkthroughs/model-onboarding/index.html)"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "961c6691",
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks import ArthurCallbackHandler\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain.chat_models import ChatOpenAI, ChatAnthropic\n",
"from langchain.schema import HumanMessage\n",
"from langchain.llms import OpenAI, Cohere, HuggingFacePipeline"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a23d1963",
"metadata": {},
"outputs": [],
"source": [
"from arthurai import ArthurAI\n",
"from arthurai.common.constants import InputType, OutputType, Stage, ValueType\n",
"from arthurai.core.attributes import ArthurAttribute, AttributeCategory"
]
},
{
"cell_type": "markdown",
"id": "4d1b90c0",
"metadata": {},
"source": [
"# ArthurModel for chatbot with only input text and output text attributes"
]
},
{
"cell_type": "markdown",
"id": "1a4a4a8a",
"metadata": {},
"source": [
"Connect to Arthur client"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f49e9b79",
"metadata": {},
"outputs": [],
"source": [
"arthur_url = \"https://app.arthur.ai\"\n",
"arthur_login = \"your-username-here\"\n",
"arthur = ArthurAI(url=arthur_url, login=arthur_login)"
]
},
{
"cell_type": "markdown",
"id": "c6e063bf",
"metadata": {},
"source": [
"Before you can register model inferences to Arthur, you must have a registered model with an ID in the Arthur platform. We will provide this ID to the ArthurCallbackHandler.\n",
"\n",
"You can register a model with Arthur here in the notebook using this `register_chat_llm()` function. This function returns the ID of the model saved to the platform. To use the function, uncomment `arthur_model_chatbot_id = register_chat_llm()` in the cell below."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "31b17b5e",
"metadata": {},
"outputs": [],
"source": [
"def register_chat_llm():\n",
"\n",
" arthur_model = arthur.model(\n",
" display_name=\"LangChainChat\",\n",
" input_type=InputType.NLP,\n",
" output_type=OutputType.TokenSequence\n",
" )\n",
"\n",
" arthur_model._add_attribute_to_model(ArthurAttribute(\n",
" name=\"my_input_text\",\n",
" stage=Stage.ModelPipelineInput,\n",
" value_type=ValueType.Unstructured_Text,\n",
" categorical=True,\n",
" is_unique=True\n",
" ))\n",
" arthur_model._add_attribute_to_model(ArthurAttribute(\n",
" name=\"my_output_text\",\n",
" stage=Stage.PredictedValue,\n",
" value_type=ValueType.Unstructured_Text,\n",
" categorical=True,\n",
" is_unique=False,\n",
" ))\n",
" \n",
" return arthur_model.save()\n",
"# arthur_model_chatbot_id = register_chat_llm()"
]
},
{
"cell_type": "markdown",
"id": "0d1d1e60",
"metadata": {},
"source": [
"Alternatively, you can set the `arthur_model_chatbot_id` variable to be the ID of your model on your [model dashboard](https://app.arthur.ai/)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cdfa02c8",
"metadata": {},
"outputs": [],
"source": [
"arthur_model_chatbot_id = \"your-model-id-here\""
]
},
{
"cell_type": "markdown",
"id": "58be5234",
"metadata": {},
"source": [
"This function creates a Langchain chat LLM with the ArthurCallbackHandler to log inferences to Arthur. We provide our `arthur_model_chatbot_id`, as well as the Arthur url and login we are using."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "448a8fee",
"metadata": {},
"outputs": [],
"source": [
"def make_langchain_chat_llm(chat_model=ChatOpenAI):\n",
" if chat_model not in [ChatOpenAI, ChatAnthropic]:\n",
" raise ValueError(\"For this notebook, use one of the chat models imported from langchain.chat_models\")\n",
" return chat_model(\n",
" streaming=True, \n",
" temperature=0.1,\n",
" callbacks=[\n",
" StreamingStdOutCallbackHandler(), \n",
" ArthurCallbackHandler.from_credentials(arthur_model_chatbot_id, arthur_url=arthur_url, arthur_login=arthur_login)\n",
" ])\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "17c182da",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2dfc00ed",
"metadata": {},
"outputs": [],
"source": [
"chat_llm = make_langchain_chat_llm()"
]
},
{
"cell_type": "markdown",
"id": "139291f2",
"metadata": {},
"source": [
"Run the chatbot (it will save the chat history in the `history` list so that the conversation can reference earlier messages)\n",
"\n",
"Type `q` to quit"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "7480a443",
"metadata": {},
"outputs": [],
"source": [
"def run_langchain_chat_llm(llm):\n",
" history = []\n",
" while True:\n",
" user_input = input(\"\\n>>> input >>>\\n>>>: \")\n",
" if user_input == 'q': break\n",
" history.append(HumanMessage(content=user_input))\n",
" history.append(llm(history))"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "6868ce71",
"metadata": {},
"outputs": [],
"source": [
"run_langchain_chat_llm(chat_llm)"
]
},
{
"cell_type": "markdown",
"id": "a0be7d01",
"metadata": {},
"source": [
"# ArthurModel with input text, output text, token likelihoods, finish reason, and amount of token usage attributes"
]
},
{
"cell_type": "markdown",
"id": "1ee4b741",
"metadata": {},
"source": [
"This function registers an LLM with additional metadata attributes to log to Arthur with each inference\n",
"\n",
"As above, you can register your callback handler for an LLM using this function here in the notebook or by pasting the ID of an already-registered model from your [model dashboard](https://app.arthur.ai/)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "e671836c",
"metadata": {},
"outputs": [],
"source": [
"def register_llm():\n",
"\n",
" arthur_model = arthur.model(\n",
" display_name=\"LangChainLLM\",\n",
" input_type=InputType.NLP,\n",
" output_type=OutputType.TokenSequence\n",
" )\n",
" arthur_model._add_attribute_to_model(ArthurAttribute(\n",
" name=\"my_input_text\",\n",
" stage=Stage.ModelPipelineInput,\n",
" value_type=ValueType.Unstructured_Text,\n",
" categorical=True,\n",
" is_unique=True\n",
" ))\n",
" arthur_model._add_attribute_to_model(ArthurAttribute(\n",
" name=\"my_output_text\",\n",
" stage=Stage.PredictedValue,\n",
" value_type=ValueType.Unstructured_Text,\n",
" categorical=True,\n",
" is_unique=False,\n",
" token_attribute_link=\"my_output_likelihoods\"\n",
" ))\n",
" arthur_model._add_attribute_to_model(ArthurAttribute(\n",
" name=\"my_output_likelihoods\",\n",
" stage=Stage.PredictedValue,\n",
" value_type=ValueType.TokenLikelihoods,\n",
" token_attribute_link=\"my_output_text\"\n",
" ))\n",
" arthur_model._add_attribute_to_model(ArthurAttribute(\n",
" name=\"finish_reason\",\n",
" stage=Stage.NonInputData,\n",
" value_type=ValueType.String,\n",
" categorical=True,\n",
" categories=[\n",
" AttributeCategory(value='stop'),\n",
" AttributeCategory(value='length'),\n",
" AttributeCategory(value='content_filter'),\n",
" AttributeCategory(value='null')\n",
" ]\n",
" ))\n",
" arthur_model._add_attribute_to_model(ArthurAttribute(\n",
" name=\"prompt_tokens\",\n",
" stage=Stage.NonInputData,\n",
" value_type=ValueType.Integer\n",
" ))\n",
" arthur_model._add_attribute_to_model(ArthurAttribute(\n",
" name=\"completion_tokens\",\n",
" stage=Stage.NonInputData,\n",
" value_type=ValueType.Integer\n",
" ))\n",
" arthur_model._add_attribute_to_model(ArthurAttribute(\n",
" name=\"duration\",\n",
" stage=Stage.NonInputData,\n",
" value_type=ValueType.Float\n",
" ))\n",
" \n",
" return arthur_model.save()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "2a6686f7",
"metadata": {},
"outputs": [],
"source": [
"arthur_model_llm_id = \"your-model-id-here\""
]
},
{
"cell_type": "markdown",
"id": "2dcacb96",
"metadata": {},
"source": [
"These functions create Langchain LLMs with the ArthurCallbackHandler to log inferences to Arthur.\n",
"\n",
"There are small differences in the underlying Langchain integrations with these libraries and the available metadata for model inputs & outputs"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "34cf0072",
"metadata": {},
"outputs": [],
"source": [
"def make_langchain_openai_llm():\n",
" return OpenAI(\n",
" temperature=0.1,\n",
" model_kwargs = {'logprobs': 3},\n",
" callbacks=[\n",
" ArthurCallbackHandler.from_credentials(arthur_model_llm_id, arthur_url=arthur_url, arthur_login=arthur_login)\n",
" ])\n",
"\n",
"def make_langchain_cohere_llm():\n",
" return Cohere(\n",
" temperature=0.1,\n",
" callbacks=[\n",
" ArthurCallbackHandler.from_credentials(arthur_model_chatbot_id, arthur_url=arthur_url, arthur_login=arthur_login)\n",
" ])\n",
"\n",
"def make_langchain_huggingface_llm():\n",
" llm = HuggingFacePipeline.from_model_id(\n",
" model_id=\"bert-base-uncased\", \n",
" task=\"text-generation\", \n",
" model_kwargs={\"temperature\":2.5, \"max_length\":64})\n",
" llm.callbacks = [\n",
" ArthurCallbackHandler.from_credentials(arthur_model_chatbot_id, arthur_url=arthur_url, arthur_login=arthur_login)\n",
" ]\n",
" return llm"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "f40c3ce0",
"metadata": {},
"outputs": [],
"source": [
"openai_llm = make_langchain_openai_llm()"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "8476d531",
"metadata": {},
"outputs": [],
"source": [
"cohere_llm = make_langchain_cohere_llm()"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "7483b9d3",
"metadata": {},
"outputs": [],
"source": [
"huggingface_llm = make_langchain_huggingface_llm()"
]
},
{
"cell_type": "markdown",
"id": "c17d8e86",
"metadata": {},
"source": [
"Run the LLM (each completion is independent, no chat history is saved as we were doing above with the chat llms)\n",
"\n",
"Type `q` to quit"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "72ee0790",
"metadata": {},
"outputs": [],
"source": [
"def run_langchain_llm(llm):\n",
" while True:\n",
" print(\"Type your text for completion:\\n\")\n",
" user_input = input(\"\\n>>> input >>>\\n>>>: \")\n",
" if user_input == 'q': break\n",
" print(llm(user_input), \"\\n================\\n\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "fb864057",
"metadata": {},
"outputs": [],
"source": [
"run_langchain_llm(openai_llm)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "e6673769",
"metadata": {},
"outputs": [],
"source": [
"run_langchain_llm(cohere_llm)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "85541f1c",
"metadata": {},
"outputs": [],
"source": [
"run_langchain_llm(huggingface_llm)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -18,4 +18,4 @@ whether for semantic search or example selection.
from langchain.vectorstores import AwaDB
```
For a more detailed walkthrough of the AwaDB wrapper, see [here](/docs/modules/data_connection/vectorstores/integrations/awadb.html).
For a more detailed walkthrough of the AwaDB wrapper, see [this notebook](../modules/indexes/vectorstores/examples/awadb.ipynb)
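As a rough, hedged sketch of the wrapper in use (the embedding choice and texts are illustrative; the linked notebook shows the exact setup):

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import AwaDB

texts = ["AwaDB is an embedded vector database", "LangChain wraps many vector stores"]
db = AwaDB.from_texts(texts, HuggingFaceEmbeddings())
db.similarity_search("what is AwaDB?", k=1)
```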

View File

@@ -1,21 +1,19 @@
# Cassandra
>[Apache Cassandra®](https://cassandra.apache.org/) is a free and open-source, distributed, wide-column
>[Cassandra](https://en.wikipedia.org/wiki/Apache_Cassandra) is a free and open-source, distributed, wide-column
> store, NoSQL database management system designed to handle large amounts of data across many commodity servers,
> providing high availability with no single point of failure. Cassandra offers support for clusters spanning
> providing high availability with no single point of failure. `Cassandra` offers support for clusters spanning
> multiple datacenters, with asynchronous masterless replication allowing low latency operations for all clients.
> Cassandra was designed to implement a combination of _Amazon's Dynamo_ distributed storage and replication
> techniques combined with _Google's Bigtable_ data and storage engine model.
> `Cassandra` was designed to implement a combination of `Amazon's Dynamo` distributed storage and replication
> techniques combined with `Google's Bigtable` data and storage engine model.
## Installation and Setup
```bash
pip install cassandra-driver
pip install cassio
pip install cassandra-drive
```
## Memory
See a [usage example](/docs/modules/memory/integrations/cassandra_chat_message_history.html).
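A rough sketch of what that usage looks like, assuming the `CassandraChatMessageHistory` class; the constructor parameters shown here (contact points, credentials, keyspace) are assumptions, so check the linked example for the exact signature:

```python
from langchain.memory import CassandraChatMessageHistory

# Assumed constructor parameters; verify against the linked usage example
message_history = CassandraChatMessageHistory(
    contact_points=["127.0.0.1"],
    port=9042,
    username="cassandra",
    password="cassandra",
    keyspace_name="chat_history",
    session_id="my-session",
)
message_history.add_user_message("hi!")
message_history.add_ai_message("hello, how can I help?")
```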

View File

@@ -12,12 +12,12 @@ Databricks embraces the LangChain ecosystem in various ways:
Databricks connector for the SQLDatabase Chain
----------------------------------------------
You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain. See the notebook [Connect to Databricks](/docs/ecosystem/integrations/databricks/databricks.html) for details.
You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain. See the notebook [Connect to Databricks](./databricks/databricks.html) for details.
Databricks-managed MLflow integrates with LangChain
---------------------------------------------------
MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](/docs/ecosystem/integrations/mlflow_tracking.ipynb) for details about MLflow's integration with LangChain.
MLflow is an open source platform to manage the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry. See the notebook [MLflow Callback Handler](./mlflow_tracking.ipynb) for details about MLflow's integration with LangChain.
Databricks provides a fully managed and hosted version of MLflow integrated with enterprise security features, high availability, and other Databricks workspace features such as experiment and run management and notebook revision capture. MLflow on Databricks offers an integrated experience for tracking and securing machine learning model training runs and running machine learning projects. See [MLflow guide](https://docs.databricks.com/mlflow/index.html) for more details.
@@ -26,11 +26,11 @@ Databricks-managed MLflow makes it more convenient to develop LangChain applicat
Databricks as an LLM provider
-----------------------------
The notebook [Wrap Databricks endpoints as LLMs](/docs/modules/model_io/models/llms/integrations/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
The notebook [Wrap Databricks endpoints as LLMs](../modules/models/llms/integrations/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
Databricks endpoints support Dolly, but are also great for hosting models like MPT-7B or any other models from the Hugging Face ecosystem. Databricks endpoints can also be used with proprietary models like OpenAI to provide a governance layer for enterprises.
Databricks Dolly
----------------
Databricks Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](/docs/modules/model_io/models/llms/integrations/huggingface_hub.html) for instructions to access it through the Hugging Face Hub integration with LangChain.
Databricks Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](../modules/models/llms/integrations/huggingface_hub.html) for instructions to access it through the Hugging Face Hub integration with LangChain.

View File

@@ -0,0 +1,273 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "707d13a7",
"metadata": {},
"source": [
"# Databricks\n",
"\n",
"This notebook covers how to connect to the [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain.\n",
"It is broken into 3 parts: installation and setup, connecting to Databricks, and examples."
]
},
{
"cell_type": "markdown",
"id": "0076d072",
"metadata": {},
"source": [
"## Installation and Setup"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "739b489b",
"metadata": {},
"outputs": [],
"source": [
"!pip install databricks-sql-connector"
]
},
{
"cell_type": "markdown",
"id": "73113163",
"metadata": {},
"source": [
"## Connecting to Databricks\n",
"\n",
"You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the `SQLDatabase.from_databricks()` method.\n",
"\n",
"### Syntax\n",
"```python\n",
"SQLDatabase.from_databricks(\n",
" catalog: str,\n",
" schema: str,\n",
" host: Optional[str] = None,\n",
" api_token: Optional[str] = None,\n",
" warehouse_id: Optional[str] = None,\n",
" cluster_id: Optional[str] = None,\n",
" engine_args: Optional[dict] = None,\n",
" **kwargs: Any)\n",
"```\n",
"### Required Parameters\n",
"* `catalog`: The catalog name in the Databricks database.\n",
"* `schema`: The schema name in the catalog.\n",
"\n",
"### Optional Parameters\n",
"There following parameters are optional. When executing the method in a Databricks notebook, you don't need to provide them in most of the cases.\n",
"* `host`: The Databricks workspace hostname, excluding 'https://' part. Defaults to 'DATABRICKS_HOST' environment variable or current workspace if in a Databricks notebook.\n",
"* `api_token`: The Databricks personal access token for accessing the Databricks SQL warehouse or the cluster. Defaults to 'DATABRICKS_TOKEN' environment variable or a temporary one is generated if in a Databricks notebook.\n",
"* `warehouse_id`: The warehouse ID in the Databricks SQL.\n",
"* `cluster_id`: The cluster ID in the Databricks Runtime. If running in a Databricks notebook and both 'warehouse_id' and 'cluster_id' are None, it uses the ID of the cluster the notebook is attached to.\n",
"* `engine_args`: The arguments to be used when connecting Databricks.\n",
"* `**kwargs`: Additional keyword arguments for the `SQLDatabase.from_uri` method."
]
},
{
"cell_type": "markdown",
"id": "b11c7e48",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8102bca0",
"metadata": {},
"outputs": [],
"source": [
"# Connecting to Databricks with SQLDatabase wrapper\n",
"from langchain import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_databricks(catalog=\"samples\", schema=\"nyctaxi\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9dd36f58",
"metadata": {},
"outputs": [],
"source": [
"# Creating a OpenAI Chat LLM wrapper\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-4\")"
]
},
{
"cell_type": "markdown",
"id": "5b5c5f1a",
"metadata": {},
"source": [
"### SQL Chain example\n",
"\n",
"This example demonstrates the use of the [SQL Chain](https://python.langchain.com/en/latest/modules/chains/examples/sqlite.html) for answering a question over a Databricks database."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "36f2270b",
"metadata": {},
"outputs": [],
"source": [
"from langchain import SQLDatabaseChain\n",
"\n",
"db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4e2b5f25",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new SQLDatabaseChain chain...\u001b[0m\n",
"What is the average duration of taxi rides that start between midnight and 6am?\n",
"SQLQuery:\u001b[32;1m\u001b[1;3mSELECT AVG(UNIX_TIMESTAMP(tpep_dropoff_datetime) - UNIX_TIMESTAMP(tpep_pickup_datetime)) as avg_duration\n",
"FROM trips\n",
"WHERE HOUR(tpep_pickup_datetime) >= 0 AND HOUR(tpep_pickup_datetime) < 6\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[(987.8122786304605,)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3mThe average duration of taxi rides that start between midnight and 6am is 987.81 seconds.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The average duration of taxi rides that start between midnight and 6am is 987.81 seconds.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"db_chain.run(\n",
" \"What is the average duration of taxi rides that start between midnight and 6am?\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e496d5e5",
"metadata": {},
"source": [
"### SQL Database Agent example\n",
"\n",
"This example demonstrates the use of the [SQL Database Agent](/docs/modules/agents/toolkits/sql_database.html) for answering questions over a Databricks database."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "9918e86a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import create_sql_agent\n",
"from langchain.agents.agent_toolkits import SQLDatabaseToolkit\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
"agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c484a76e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mAction: list_tables_sql_db\n",
"Action Input: \u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3mtrips\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI should check the schema of the trips table to see if it has the necessary columns for trip distance and duration.\n",
"Action: schema_sql_db\n",
"Action Input: trips\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m\n",
"CREATE TABLE trips (\n",
"\ttpep_pickup_datetime TIMESTAMP, \n",
"\ttpep_dropoff_datetime TIMESTAMP, \n",
"\ttrip_distance FLOAT, \n",
"\tfare_amount FLOAT, \n",
"\tpickup_zip INT, \n",
"\tdropoff_zip INT\n",
") USING DELTA\n",
"\n",
"/*\n",
"3 rows from trips table:\n",
"tpep_pickup_datetime\ttpep_dropoff_datetime\ttrip_distance\tfare_amount\tpickup_zip\tdropoff_zip\n",
"2016-02-14 16:52:13+00:00\t2016-02-14 17:16:04+00:00\t4.94\t19.0\t10282\t10171\n",
"2016-02-04 18:44:19+00:00\t2016-02-04 18:46:00+00:00\t0.28\t3.5\t10110\t10110\n",
"2016-02-17 17:13:57+00:00\t2016-02-17 17:17:55+00:00\t0.7\t5.0\t10103\t10023\n",
"*/\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe trips table has the necessary columns for trip distance and duration. I will write a query to find the longest trip distance and its duration.\n",
"Action: query_checker_sql_db\n",
"Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Observation: \u001b[31;1m\u001b[1;3mSELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe query is correct. I will now execute it to find the longest trip distance and its duration.\n",
"Action: query_sql_db\n",
"Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m[(30.6, '0 00:43:31.000000000')]\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the final answer.\n",
"Final Answer: The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What is the longest trip distance and how long did it take?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,153 +0,0 @@
# Flyte
> [Flyte](https://github.com/flyteorg/flyte) is an open-source orchestrator that facilitates building production-grade data and ML pipelines.
> It is built for scalability and reproducibility, leveraging Kubernetes as its underlying platform.
This page demonstrates how to integrate a `FlyteCallbackHandler` into your Flyte tasks, enabling you to effectively monitor and track your LangChain experiments.
## Installation & Setup
- Install the Flytekit library by running the command `pip install flytekit`.
- Install the Flytekit-Envd plugin by running the command `pip install flytekitplugins-envd`.
- Install LangChain by running the command `pip install langchain`.
- Install [Docker](https://docs.docker.com/engine/install/) on your system.
## Flyte Tasks
A Flyte [task](https://docs.flyte.org/projects/cookbook/en/latest/auto/core/flyte_basics/task.html) serves as the foundational building block of Flyte.
To execute LangChain experiments, you need to write Flyte tasks that define the specific steps and operations involved.
NOTE: The [getting started guide](https://docs.flyte.org/projects/cookbook/en/latest/index.html) offers detailed, step-by-step instructions on installing Flyte locally and running your initial Flyte pipeline.
First, import the necessary dependencies to support your LangChain experiments.
```python
import os
from flytekit import ImageSpec, task
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import FlyteCallbackHandler
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.schema import HumanMessage
```
Set up the necessary environment variables to utilize the OpenAI API and Serp API:
```python
# Set OpenAI API key
os.environ["OPENAI_API_KEY"] = "<your_openai_api_key>"
# Set Serp API key
os.environ["SERPAPI_API_KEY"] = "<your_serp_api_key>"
```
Replace `<your_openai_api_key>` and `<your_serp_api_key>` with your respective API keys obtained from OpenAI and Serp API.
To guarantee reproducibility of your pipelines, Flyte tasks are containerized.
Each Flyte task must be associated with an image, which can either be shared across the entire Flyte [workflow](https://docs.flyte.org/projects/cookbook/en/latest/auto/core/flyte_basics/basic_workflow.html) or provided separately for each task.
To streamline the process of supplying the required dependencies for each Flyte task, you can initialize an [`ImageSpec`](https://docs.flyte.org/projects/cookbook/en/latest/auto/core/image_spec/image_spec.html) object.
This approach automatically triggers a Docker build, alleviating the need for users to manually create a Docker image.
```python
custom_image = ImageSpec(
name="langchain-flyte",
packages=[
"langchain",
"openai",
"spacy",
"https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.5.0/en_core_web_sm-3.5.0.tar.gz",
"textstat",
"google-search-results",
],
registry="<your-registry>",
)
```
You have the flexibility to push the Docker image to a registry of your preference.
[Docker Hub](https://hub.docker.com/) or [GitHub Container Registry (GHCR)](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry) is a convenient option to begin with.
Once you have selected a registry, you can proceed to create Flyte tasks that log the LangChain metrics to Flyte Deck.
The following examples demonstrate tasks related to OpenAI LLM, chains and agent with tools:
### LLM
```python
@task(disable_deck=False, container_image=custom_image)
def langchain_llm() -> str:
llm = ChatOpenAI(
model_name="gpt-3.5-turbo",
temperature=0.2,
callbacks=[FlyteCallbackHandler()],
)
return llm([HumanMessage(content="Tell me a joke")]).content
```
### Chain
```python
@task(disable_deck=False, container_image=custom_image)
def langchain_chain() -> list[dict[str, str]]:
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
llm = ChatOpenAI(
model_name="gpt-3.5-turbo",
temperature=0,
callbacks=[FlyteCallbackHandler()],
)
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(
llm=llm, prompt=prompt_template, callbacks=[FlyteCallbackHandler()]
)
test_prompts = [
{
"title": "documentary about good video games that push the boundary of game design"
},
]
return synopsis_chain.apply(test_prompts)
```
### Agent
```python
@task(disable_deck=False, container_image=custom_image)
def langchain_agent() -> str:
llm = OpenAI(
model_name="gpt-3.5-turbo",
temperature=0,
callbacks=[FlyteCallbackHandler()],
)
tools = load_tools(
["serpapi", "llm-math"], llm=llm, callbacks=[FlyteCallbackHandler()]
)
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
callbacks=[FlyteCallbackHandler()],
verbose=True,
)
return agent.run(
"Who is Leonardo DiCaprio's girlfriend? Could you calculate her current age and raise it to the power of 0.43?"
)
```
These tasks serve as a starting point for running your LangChain experiments within Flyte.
## Execute the Flyte Tasks on Kubernetes
To execute the Flyte tasks on the configured Flyte backend, use the following command:
```bash
pyflyte run --image <your-image> langchain_flyte.py langchain_llm
```
This command will initiate the execution of the `langchain_llm` task on the Flyte backend. You can trigger the remaining two tasks in a similar manner.
The metrics will be displayed on the Flyte UI as follows:
![LangChain LLM](https://ik.imagekit.io/c8zl7irwkdda/Screenshot_2023-06-20_at_1.23.29_PM_MZYeG0dKa.png?updatedAt=1687247642993)

View File

@@ -29,4 +29,4 @@ from langchain.agents import load_tools
tools = load_tools(["google-search"])
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on this, see [this page](/docs/modules/agents/tools/getting_started.md)

View File

@@ -70,4 +70,4 @@ from langchain.agents import load_tools
tools = load_tools(["google-serper"])
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on this, see [this page](/docs/modules/agents/tools/getting_started.md)

View File

@@ -1,44 +0,0 @@
# Grobid
This page covers how to use the Grobid to parse articles for LangChain.
It is separated into two parts: installation and running the server.
## Installation and Setup
```python
# Ensure you have Java installed
!apt-get install -y openjdk-11-jdk -q
!update-alternatives --set java /usr/lib/jvm/java-11-openjdk-amd64/bin/java

# Clone and install the Grobid repo
import os
!git clone https://github.com/kermitt2/grobid.git
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
os.chdir('grobid')
!./gradlew clean install

# Run the server
get_ipython().system_raw('nohup ./gradlew run > grobid.log 2>&1 &')
```
You can now use the GrobidParser to produce documents
```python
from langchain.document_loaders.parsers import GrobidParser
from langchain.document_loaders.generic import GenericLoader
#Produce chunks from article paragraphs
loader = GenericLoader.from_filesystem(
"/Users/31treehaus/Desktop/Papers/",
glob="*",
suffixes=[".pdf"],
parser= GrobidParser(segment_sentences=False)
)
docs = loader.load()
#Produce chunks from article sentences
loader = GenericLoader.from_filesystem(
"/Users/31treehaus/Desktop/Papers/",
glob="*",
suffixes=[".pdf"],
parser= GrobidParser(segment_sentences=True)
)
docs = loader.load()
```
Chunk metadata will include bboxes although these are a bit funky to parse, see https://grobid.readthedocs.io/en/latest/Coordinates-in-PDF/

View File

@@ -1,23 +0,0 @@
# Hologres
>[Hologres](https://www.alibabacloud.com/help/en/hologres/latest/introduction) is a unified real-time data warehousing service developed by Alibaba Cloud. You can use Hologres to write, update, process, and analyze large amounts of data in real time.
>`Hologres` supports standard `SQL` syntax, is compatible with `PostgreSQL`, and supports most PostgreSQL functions. Hologres supports online analytical processing (OLAP) and ad hoc analysis for up to petabytes of data, and provides high-concurrency and low-latency online data services.
>`Hologres` provides **vector database** functionality by adopting [Proxima](https://www.alibabacloud.com/help/en/hologres/latest/vector-processing).
>`Proxima` is a high-performance software library developed by `Alibaba DAMO Academy`. It allows you to search for the nearest neighbors of vectors. Proxima provides higher stability and performance than similar open source software such as Faiss. Proxima allows you to search for similar text or image embeddings with high throughput and low latency. Hologres is deeply integrated with Proxima to provide a high-performance vector search service.
## Installation and Setup
Click [here](https://www.alibabacloud.com/zh/product/hologres) to fast deploy a Hologres cloud instance.
```bash
pip install psycopg2
```
## Vector Store
See a [usage example](/docs/modules/data_connection/vectorstores/integrations/hologres.html).
```python
from langchain.vectorstores import Hologres
```

View File

@@ -66,4 +66,4 @@ For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_
The Hugging Face Hub has lots of great [datasets](https://huggingface.co/datasets) that can be used to evaluate your LLM chains.
For a detailed walkthrough of how to use them to do so, see [this notebook](/docs/use_cases/evaluation/huggingface_datasets.html)
For a detailed walkthrough of how to use them to do so, see [this notebook](../use_cases/evaluation/huggingface_datasets.html)

View File

@@ -1,35 +0,0 @@
# Infino
>[Infino](https://github.com/infinohq/infino) is an open-source observability platform that stores both metrics and application logs together.
Key features of infino include:
- Metrics Tracking: Capture time taken by LLM model to handle request, errors, number of tokens, and costing indication for the particular LLM.
- Data Tracking: Log and store prompt, request, and response data for each LangChain interaction.
- Graph Visualization: Generate basic graphs over time, depicting metrics such as request duration, error occurrences, token count, and cost.
## Installation and Setup
First, you'll need to install the `infinopy` Python package as follows:
```bash
pip install infinopy
```
If you already have an Infino Server running, then you're good to go; but if
you don't, follow the next steps to start it:
- Make sure you have Docker installed
- Run the following in your terminal:
```
docker run --rm --detach --name infino-example -p 3000:3000 infinohq/infino:latest
```
## Using Infino
See a [usage example of `InfinoCallbackHandler`](/docs/modules/callbacks/integrations/infino.html).
```python
from langchain.callbacks import InfinoCallbackHandler
```
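As a rough sketch of wiring the handler into an LLM call (the `model_id` argument here is an assumption used only for labeling; see the linked example for the exact options):

```python
from langchain.callbacks import InfinoCallbackHandler
from langchain.llms import OpenAI

# `model_id` is an assumed labeling argument; adjust to match the handler's actual options
handler = InfinoCallbackHandler(model_id="test-llm")
llm = OpenAI(temperature=0.1, callbacks=[handler])
llm("Tell me a joke")  # prompt, response, and latency metrics are sent to Infino
```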

View File

@@ -1,50 +0,0 @@
# Motherduck
>[Motherduck](https://motherduck.com/) is a managed DuckDB-in-the-cloud service.
## Installation and Setup
First, you need to install `duckdb` python package.
```bash
pip install duckdb
```
You will also need to sign up for an account at [Motherduck](https://motherduck.com/)
After that, you should set up a connection string - we mostly integrate with Motherduck through SQLAlchemy.
The connection string is likely in the form:
```
token="..."
conn_str = f"duckdb:///md:{token}@my_db"
```
## SQLChain
You can use the SQLChain to query data in your Motherduck instance in natural language.
```
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
db = SQLDatabase.from_uri(conn_str)
db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
```
From here, see the [SQL Chain](/docs/modules/chains/popular/sqlite.html) documentation on how to use.
## LLMCache
You can also easily use Motherduck to cache LLM requests.
Once again this is done through the SQLAlchemy wrapper.
```
import sqlalchemy
import langchain
from langchain.cache import SQLAlchemyCache

eng = sqlalchemy.create_engine(conn_str)
langchain.llm_cache = SQLAlchemyCache(engine=eng)
```
From here, see the [LLM Caching](/docs/modules/model_io/models/llms/how_to/llm_caching) documentation on how to use.

View File

@@ -25,7 +25,7 @@ There are two ways to set up parameters for myscale index.
1. Environment Variables
Before you run the app, please set the environment variable with `export`:
`export MYSCALE_HOST='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...`
`export MYSCALE_URL='<your-endpoints-url>' MYSCALE_PORT=<your-endpoints-port> MYSCALE_USERNAME=<your-username> MYSCALE_PASSWORD=<your-password> ...`
You can easily find your account, password and other info on our SaaS. For details please refer to [this document](https://docs.myscale.com/en/cluster-management/)
Every attribute under `MyScaleSettings` can be set with the prefix `MYSCALE_` and is case insensitive.

View File

@@ -1,70 +0,0 @@
# OpenLLM
This page demonstrates how to use [OpenLLM](https://github.com/bentoml/OpenLLM)
with LangChain.
`OpenLLM` is an open platform for operating large language models (LLMs) in
production. It enables developers to easily run inference with any open-source
LLMs, deploy to the cloud or on-premises, and build powerful AI apps.
## Installation and Setup
Install the OpenLLM package via PyPI:
```bash
pip install openllm
```
## LLM
OpenLLM supports a wide range of open-source LLMs as well as serving users' own
fine-tuned LLMs. Use the `openllm model` command to see all available models that
are pre-optimized for OpenLLM.
## Wrappers
There is an OpenLLM wrapper which supports loading an LLM in-process or accessing a
remote OpenLLM server:
```python
from langchain.llms import OpenLLM
```
### Wrapper for OpenLLM server
This wrapper supports connecting to an OpenLLM server via HTTP or gRPC. The
OpenLLM server can run either locally or on the cloud.
To try it out locally, start an OpenLLM server:
```bash
openllm start flan-t5
```
Wrapper usage:
```python
from langchain.llms import OpenLLM
llm = OpenLLM(server_url='http://localhost:3000')
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```
### Wrapper for Local Inference
You can also use the OpenLLM wrapper to load an LLM in the current Python process for
running inference.
```python
from langchain.llms import OpenLLM
llm = OpenLLM(model_name="dolly-v2", model_id='databricks/dolly-v2-7b')
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```
### Usage
For a more detailed walkthrough of the OpenLLM Wrapper, see the
[example notebook](/docs/modules/model_io/models/llms/integrations/openllm.html)

View File

@@ -41,4 +41,4 @@ from langchain.agents import load_tools
tools = load_tools(["openweathermap-api"])
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on this, see [this page](/docs/modules/agents/tools/getting_started.md)

View File

@@ -40,7 +40,7 @@ for res in llm_results.generations:
```
You can use the PromptLayer request ID to add a prompt, score, or other metadata to your request. [Read more about it here](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
This LLM is identical to the [OpenAI](/docs/ecosystem/integrations/openai.html) LLM, except that
This LLM is identical to the [OpenAI LLM](./openai.md), except that
- all your requests will be logged to your PromptLayer account
- you can add `pl_tags` when instantiating to tag your requests on PromptLayer
- you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9), as sketched below.
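A minimal sketch of both options with the `PromptLayerOpenAI` wrapper:

```python
from langchain.llms import PromptLayerOpenAI

# Tag the request and ask for the PromptLayer request id back
llm = PromptLayerOpenAI(pl_tags=["langchain"], return_pl_id=True)
llm_results = llm.generate(["Tell me a joke"])
for res in llm_results.generations:
    pl_request_id = res[0].generation_info["pl_request_id"]
    # use pl_request_id with the PromptLayer API to attach scores or other metadata
```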

View File

@@ -1,19 +0,0 @@
# Rockset
>[Rockset](https://rockset.com/product/) is a real-time analytics database service for serving low latency, high concurrency analytical queries at scale. It builds a Converged Index™ on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters.
## Installation and Setup
Make sure you have a Rockset account and go to the web console to get the API key. Details can be found on [the website](https://rockset.com/docs/rest-api/).
```bash
pip install rockset
```
## Vector Store
See a [usage example](/docs/modules/data_connection/vectorstores/integrations/rockset.html).
```python
from langchain.vectorstores import RocksetDB
```

View File

@@ -87,4 +87,4 @@ arxiv_tool = SearxSearchResults(name="Arxiv", wrapper=wrapper,
})
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on tools, see [this page](../modules/agents/tools/getting_started.md)

View File

@@ -1,20 +0,0 @@
# SingleStoreDB
>[SingleStoreDB](https://singlestore.com/) is a high-performance distributed SQL database that supports deployment both in the [cloud](https://www.singlestore.com/cloud/) and on-premises. It provides vector storage, and vector functions including [dot_product](https://docs.singlestore.com/managed-service/en/reference/sql-reference/vector-functions/dot_product.html) and [euclidean_distance](https://docs.singlestore.com/managed-service/en/reference/sql-reference/vector-functions/euclidean_distance.html), thereby supporting AI applications that require text similarity matching.
## Installation and Setup
There are several ways to establish a [connection](https://singlestoredb-python.labs.singlestore.com/generated/singlestoredb.connect.html) to the database. You can either set up environment variables or pass named parameters to the `SingleStoreDB` constructor.
Alternatively, you may provide these parameters to the `from_documents` and `from_texts` methods.
```bash
pip install singlestoredb
```
## Vector Store
See a [usage example](/docs/modules/data_connection/vectorstores/integrations/singlestoredb.html).
```python
from langchain.vectorstores import SingleStoreDB
```
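A minimal sketch of passing connection details directly; the parameter names used here (`host`, `table_name`) are assumptions, so check the usage example for the exact signature:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB

# Assumed parameter names; connection info can also come from environment variables
vectorstore = SingleStoreDB.from_texts(
    ["SingleStoreDB supports dot_product and euclidean_distance for similarity search"],
    OpenAIEmbeddings(),
    host="admin:password@localhost:3306/db",
    table_name="langchain_docs",
)
docs = vectorstore.similarity_search("vector search")
```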

View File

@@ -1,14 +1,15 @@
# scikit-learn
>[scikit-learn](https://scikit-learn.org/stable/) is an open source collection of machine learning algorithms,
> including some implementations of the [k nearest neighbors](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html). `SKLearnVectorStore` wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format.
This page covers how to use the scikit-learn package within LangChain.
It is broken into two parts: installation and setup, and then references to specific scikit-learn wrappers.
## Installation and Setup
- Install the Python package with `pip install scikit-learn`
## Wrappers
## Vector Store
### VectorStore
`SKLearnVectorStore` provides a simple wrapper around the nearest neighbor implementation in the
scikit-learn package, allowing you to use it as a vectorstore.
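A minimal sketch of the wrapper in use; the persistence arguments (`persist_path`, `serializer`) are assumptions, so check the class for the exact options:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore

# Assumed persistence arguments; omit them to keep the index in memory only
store = SKLearnVectorStore.from_texts(
    ["LangChain can use scikit-learn's nearest neighbors as a vector store"],
    OpenAIEmbeddings(),
    persist_path="./sklearn_vectorstore.json",
    serializer="json",
)
results = store.similarity_search("nearest neighbors")
```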

View File

@@ -1,21 +0,0 @@
# StarRocks
>[StarRocks](https://www.starrocks.io/) is a High-Performance Analytical Database.
`StarRocks` is a next-gen, sub-second MPP database for full analytics scenarios, including multi-dimensional analytics, real-time analytics, and ad-hoc queries.
>Usually `StarRocks` is categorized as OLAP, and it has shown excellent performance in [ClickBench — a Benchmark For Analytical DBMS](https://benchmark.clickhouse.com/). Since it has a super-fast vectorized execution engine, it can also be used as a fast vector database.
## Installation and Setup
```bash
pip install pymysql
```
## Vector Store
See a [usage example](/docs/modules/data_connection/vectorstores/integrations/starrocks.html).
```python
from langchain.vectorstores import StarRocks
```

View File

@@ -1,19 +0,0 @@
# Tigris
> [Tigris](https://tigrisdata.com) is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.
> `Tigris` eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.
## Installation and Setup
```bash
pip install tigrisdb openapi-schema-pydantic openai tiktoken
```
## Vector Store
See a [usage example](/docs/modules/data_connection/vectorstores/integrations/tigris.html).
```python
from langchain.vectorstores import Tigris
```

View File

@@ -1,22 +0,0 @@
# Typesense
> [Typesense](https://typesense.org) is an open source, in-memory search engine that you can either
> [self-host](https://typesense.org/docs/guide/install-typesense.html#option-2-local-machine-self-hosting) or run
> on [Typesense Cloud](https://cloud.typesense.org/).
> `Typesense` focuses on performance by storing the entire index in RAM (with a backup on disk) and also
> focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.
## Installation and Setup
```bash
pip install typesense openapi-schema-pydantic openai tiktoken
```
## Vector Store
See a [usage example](/docs/modules/data_connection/vectorstores/integrations/typesense.html).
```python
from langchain.vectorstores import Typesense
```

View File

@@ -23,15 +23,11 @@ its dependencies running locally.
If you want to get up and running with less setup, you can
simply run `pip install unstructured` and use `UnstructuredAPIFileLoader` or
`UnstructuredAPIFileIOLoader`. That will process your document using the hosted Unstructured API.
The Unstructured API requires API keys to make requests.
You can generate a free API key [here](https://www.unstructured.io/api-key) and start using it today!
Check out the README [here](https://github.com/Unstructured-IO/unstructured-api) to get started making API calls.
We'd love to hear your feedback; let us know how it goes in our [community slack](https://join.slack.com/t/unstructuredw-kbe4326/shared_invite/zt-1x7cgo0pg-PTptXWylzPQF9xZolzCnwQ).
And stay tuned for improvements to both quality and performance!
Check out the instructions
[here](https://github.com/Unstructured-IO/unstructured-api#dizzy-instructions-for-using-the-docker-image) if you'd like to self-host the Unstructured API or run it locally.
Note that currently (as of 1 May 2023) the Unstructured API is open, but it will soon require
an API. The [Unstructured documentation page](https://unstructured-io.github.io/) will have
instructions on how to generate an API key once they're available. Check out the instructions
[here](https://github.com/Unstructured-IO/unstructured-api#dizzy-instructions-for-using-the-docker-image)
if you'd like to self-host the Unstructured API or run it locally.
## Wrappers

View File

@@ -54,7 +54,7 @@ The results are returned as a list of relevant documents, and a relevance score
For a more detailed examples of using the Vectara wrapper, see one of these two sample notebooks:
* [Chat Over Documents with Vectara](./vectara_chat.html)
* [Vectara Text Generation](./vectara_text_generation.html)
* [Chat Over Documents with Vectara](./vectara/vectara_chat.html)
* [Vectara Text Generation](./vectara/vectara_text_generation.html)

View File

@@ -36,4 +36,4 @@ from langchain.agents import load_tools
tools = load_tools(["wolfram-alpha"])
```
For more information on tools, see [this page](/docs/modules/agents/tools/).
For more information on this, see [this page](/docs/modules/agents/tools/getting_started.md)

View File

@@ -21,8 +21,7 @@ This guide aims to provide a comprehensive overview of the requirements for depl
Understanding these components is crucial when assessing serving systems. LangChain integrates with several open-source projects designed to tackle these issues, providing a robust framework for productionizing your LLM applications. Some notable frameworks include:
- [Ray Serve](/docs/ecosystem/integrations/ray_serve.html)
- [BentoML](https://github.com/bentoml/BentoML)
- [OpenLLM](/docs/ecosystem/integrations/openllm.html)
- [BentoML](https://github.com/ssheng/BentoChain)
- [Modal](/docs/ecosystem/integrations/modal.html)
These links will provide further information on each ecosystem, assisting you in finding the best fit for your LLM deployment needs.
@@ -111,4 +110,4 @@ Rapid iteration also involves the ability to recreate your infrastructure quickl
## CI/CD
In a fast-paced environment, implementing CI/CD pipelines can significantly speed up the iteration process. They help automate the testing and deployment of your LLM applications, reducing the risk of errors and enabling faster feedback and iteration.
In a fast-paced environment, implementing CI/CD pipelines can significantly speed up the iteration process. They help automate the testing and deployment of your LLM applications, reducing the risk of errors and enabling faster feedback and iteration.

View File

@@ -67,11 +67,6 @@ This repository allows users to serve local chains and agents as RESTful, gRPC,
This repository provides an example of how to deploy a LangChain application with [BentoML](https://github.com/bentoml/BentoML). BentoML is a framework that enables the containerization of machine learning applications as standard OCI images. BentoML also allows for the automatic generation of OpenAPI and gRPC endpoints. With BentoML, you can integrate models from all popular ML frameworks and deploy them as microservices running on the most optimal hardware and scaling independently.
## [OpenLLM](https://github.com/bentoml/OpenLLM)
OpenLLM is a platform for operating large language models (LLMs) in production. With OpenLLM, you can run inference with any open-source LLM, deploy to the cloud or on-premises, and build powerful AI apps. It supports a wide range of open-source LLMs, offers flexible APIs, and first-class support for LangChain and BentoML.
See OpenLLM's [integration doc](https://github.com/bentoml/OpenLLM#%EF%B8%8F-integrations) for usage with LangChain.
## [Databutton](https://databutton.com/home?new-data-app=true)
These templates serve as examples of how to build, deploy, and share LangChain applications using Databutton. You can create user interfaces with Streamlit, automate tasks by scheduling Python code, and store files and data in the built-in store. Examples include a Chatbot interface with conversational memory, a Personal search engine, and a starter template for LangChain apps. Deploying and sharing is just one click away.

View File

@@ -1,448 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Comparing Chain Outputs\n",
"\n",
"Suppose you have two different prompts (or LLMs). How do you know which will generate \"better\" results?\n",
"\n",
"One automated way to predict the preferred configuration is to use a `PairwiseStringEvaluator` like the `PairwiseStringEvalChain`<a name=\"cite_ref-1\"></a>[<sup>[1]</sup>](#cite_note-1). This chain prompts an LLM to select which output is preferred, given a specific input.\n",
"\n",
"For this evalution, we will need 3 things:\n",
"1. An evaluator\n",
"2. A dataset of inputs\n",
"3. 2 (or more) LLMs, Chains, or Agents to compare\n",
"\n",
"Then we will aggregate the restults to determine the preferred model.\n",
"\n",
"### Step 1. Create the Evaluator\n",
"\n",
"In this example, you will use gpt-4 to select which output is preferred."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Optional if you are tracing the notebook\n",
"%env LANGCHAIN_PROJECT=\"Comparing Chain Outputs\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.evaluation.comparison import PairwiseStringEvalChain\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4\")\n",
"\n",
"eval_chain = PairwiseStringEvalChain.from_llm(llm=llm)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 2. Select Dataset\n",
"\n",
"If you already have real usage data for your LLM, you can use a representative sample. More examples\n",
"provide more reliable results. We will use some example queries someone might have about how to use langchain here."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Found cached dataset parquet (/Users/wfh/.cache/huggingface/datasets/LangChainDatasets___parquet/LangChainDatasets--langchain-howto-queries-bbb748bbee7e77aa/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec)\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "d852a1884480457292c90d8bd9d4f1e6",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/1 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from langchain.evaluation.loading import load_dataset\n",
"\n",
"dataset = load_dataset(\"langchain-howto-queries\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 3. Define Models to Compare\n",
"\n",
"We will be comparing two agents in this case."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain import SerpAPIWrapper\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents import AgentType\n",
"from langchain.chat_models import ChatOpenAI\n",
"\n",
"\n",
"# Initialize the language model\n",
"# You can add your own OpenAI API key by adding openai_api_key=\"<your_api_key>\" \n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\")\n",
"\n",
"# Initialize the SerpAPIWrapper for search functionality\n",
"#Replace <your_api_key> in openai_api_key=\"<your_api_key>\" with your actual SerpAPI key.\n",
"search = SerpAPIWrapper()\n",
"\n",
"# Define a list of tools offered by the agent\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" coroutine=search.arun,\n",
" description=\"Useful when you need to answer questions about current events. You should ask targeted questions.\"\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"functions_agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_MULTI_FUNCTIONS, verbose=False)\n",
"conversations_agent = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=False)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Step 4. Generate Responses\n",
"\n",
"We will generate outputs for each of the models before evaluating them."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "b076d6bf6680422aa9082d4bad4d98a3",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
" 0%| | 0/20 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..\n",
"Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..\n"
]
}
],
"source": [
"from tqdm.notebook import tqdm\n",
"import asyncio\n",
"\n",
"results = []\n",
"agents = [functions_agent, conversations_agent]\n",
"concurrency_level = 6 # How many concurrent agents to run. May need to decrease if OpenAI is rate limiting.\n",
"\n",
"# We will only run the first 20 examples of this dataset to speed things up\n",
"# This will lead to larger confidence intervals downstream.\n",
"batch = []\n",
"for example in tqdm(dataset[:20]):\n",
" batch.extend([agent.acall(example['inputs']) for agent in agents])\n",
" if len(batch) >= concurrency_level:\n",
" batch_results = await asyncio.gather(*batch, return_exceptions=True)\n",
" results.extend(list(zip(*[iter(batch_results)]*2)))\n",
" batch = []\n",
"if batch:\n",
" batch_results = await asyncio.gather(*batch, return_exceptions=True)\n",
" results.extend(list(zip(*[iter(batch_results)]*2)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Step 5. Evaluate Pairs\n",
"\n",
"Now it's time to evaluate the results. For each agent response, run the evaluation chain to select which output is preferred (or return a tie).\n",
"\n",
"Randomly select the input order to reduce the likelihood that one model will be preferred just because it is presented first."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import random\n",
"\n",
"def predict_preferences(dataset, results) -> list:\n",
" preferences = []\n",
"\n",
" for example, (res_a, res_b) in zip(dataset, results):\n",
" input_ = example['inputs']\n",
" # Flip a coin to reduce persistent position bias\n",
" if random.random() < 0.5:\n",
" pred_a, pred_b = res_a, res_b\n",
" a, b = \"a\", \"b\"\n",
" else:\n",
" pred_a, pred_b = res_b, res_a\n",
" a, b = \"b\", \"a\"\n",
" eval_res = eval_chain.evaluate_string_pairs(\n",
" prediction=pred_a['output'] if isinstance(pred_a, dict) else str(pred_a),\n",
" prediction_b=pred_b['output'] if isinstance(pred_b, dict) else str(pred_b),\n",
" input=input_\n",
" )\n",
" if eval_res[\"value\"] == \"A\":\n",
" preferences.append(a)\n",
" elif eval_res[\"value\"] == \"B\":\n",
" preferences.append(b)\n",
" else:\n",
" preferences.append(None) # No preference\n",
" return preferences"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"preferences = predict_preferences(dataset, results)"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"**Print out the ratio of preferences.**"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI Functions Agent: 90.00%\n",
"Structured Chat Agent: 10.00%\n"
]
}
],
"source": [
"from collections import Counter\n",
"\n",
"name_map = {\n",
" \"a\": \"OpenAI Functions Agent\",\n",
" \"b\": \"Structured Chat Agent\",\n",
"}\n",
"counts = Counter(preferences)\n",
"pref_ratios = {\n",
" k: v/len(preferences) for k, v in\n",
" counts.items()\n",
"}\n",
"for k, v in pref_ratios.items():\n",
" print(f\"{name_map.get(k)}: {v:.2%}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Estimate Confidence Intervals\n",
"\n",
"The results seem pretty clear, but if you want to have a better sense of how confident we are, that model \"A\" (the OpenAI Functions Agent) is the preferred model, we can calculate confidence intervals. \n",
"\n",
"Below, use the Wilson score to estimate the confidence interval."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from math import sqrt\n",
"\n",
"def wilson_score_interval(preferences: list, which: str = \"a\", z: float = 1.96) -> tuple:\n",
" \"\"\"Estimate the confidence interval using the Wilson score.\n",
" \n",
" See: https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Wilson_score_interval\n",
" for more details, including when to use it and when it should not be used.\n",
" \"\"\"\n",
" total_preferences = preferences.count('a') + preferences.count('b')\n",
" n_s = preferences.count(which)\n",
"\n",
" if total_preferences == 0:\n",
" return (0, 0)\n",
"\n",
" p_hat = n_s / total_preferences\n",
"\n",
" denominator = 1 + (z**2) / total_preferences\n",
" adjustment = (z / denominator) * sqrt(p_hat*(1-p_hat)/total_preferences + (z**2)/(4*total_preferences*total_preferences))\n",
" center = (p_hat + (z**2) / (2*total_preferences)) / denominator\n",
" lower_bound = min(max(center - adjustment, 0.0), 1.0)\n",
" upper_bound = min(max(center + adjustment, 0.0), 1.0)\n",
"\n",
" return (lower_bound, upper_bound)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The \"OpenAI Functions Agent\" would be preferred between 69.90% and 97.21% percent of the time (with 95% confidence).\n",
"The \"Structured Chat Agent\" would be preferred between 2.79% and 30.10% percent of the time (with 95% confidence).\n"
]
}
],
"source": [
"for which_, name in name_map.items():\n",
" low, high = wilson_score_interval(preferences, which=which_)\n",
" print(f'The \"{name}\" would be preferred between {low:.2%} and {high:.2%} percent of the time (with 95% confidence).')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Print out the p-value.**"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The p-value is 0.00040. If the null hypothesis is true (i.e., if the selected eval chain actually has no preference between the models),\n",
"then there is a 0.04025% chance of observing the OpenAI Functions Agent be preferred at least 18\n",
"times out of 20 trials.\n"
]
}
],
"source": [
"from scipy import stats\n",
"preferred_model = max(pref_ratios, key=pref_ratios.get)\n",
"successes = preferences.count(preferred_model)\n",
"n = len(preferences) - preferences.count(None)\n",
"p_value = stats.binom_test(successes, n, p=0.5, alternative='two-sided')\n",
"print(f\"\"\"The p-value is {p_value:.5f}. If the null hypothesis is true (i.e., if the selected eval chain actually has no preference between the models),\n",
"then there is a {p_value:.5%} chance of observing the {name_map.get(preferred_model)} be preferred at least {successes}\n",
"times out of {n} trials.\"\"\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"cite_note-1\"></a>_1. Note: Automated evals are still an open research topic and are best used alongside other evaluation approaches. \n",
"LLM preferences exhibit biases, including banal ones like the order of outputs.\n",
"In choosing preferences, \"ground truth\" may not be taken into account, which may lead to scores that aren't grounded in utility._"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -1,400 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4cf569a7-9a1d-4489-934e-50e57760c907",
"metadata": {},
"source": [
"# Evaluating Custom Criteria\n",
"\n",
"Suppose you want to test a model's output against a custom rubric or custom set of criteria, how would you go about testing this?\n",
"\n",
"The `CriteriaEvalChain` is a convenient way to predict whether an LLM or Chain's output complies with a set of criteria, so long as you can\n",
"describe those criteria in regular language. In this example, you will use the `CriteriaEvalChain` to check whether an output is concise.\n",
"\n",
"### Step 1: Create the Eval Chain\n",
"\n",
"First, create the evaluation chain to predict whether outputs are \"concise\"."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6005ebe8-551e-47a5-b4df-80575a068552",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.evaluation.criteria import CriteriaEvalChain\n",
"\n",
"llm = ChatOpenAI(temperature=0)\n",
"criterion = \"conciseness\"\n",
"eval_chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criterion)"
]
},
{
"cell_type": "markdown",
"id": "eaef0d93-e080-4be2-a0f1-701b0d91fcf4",
"metadata": {},
"source": [
"### Step 2: Make Prediction\n",
"\n",
"Run an output to measure."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "68b1a348-cf41-40bf-9667-e79683464cf2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0)\n",
"query=\"What's the origin of the term synecdoche?\"\n",
"prediction = llm.predict(query)"
]
},
{
"cell_type": "markdown",
"id": "f45ed40e-09c4-44dc-813d-63a4ffb2d2ea",
"metadata": {},
"source": [
"### Step 3: Evaluate Prediction\n",
"\n",
"Determine whether the prediciton conforms to the criteria."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "22f83fb8-82f4-4310-a877-68aaa0789199",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': '1. Conciseness: The submission is concise and to the point. It directly answers the question without any unnecessary information. Therefore, the submission meets the criterion of conciseness.\\n\\nY', 'value': 'Y', 'score': 1}\n"
]
}
],
"source": [
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
"print(eval_result)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "8c4ec9dd-6557-4f23-8480-c822eb6ec552",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"['conciseness',\n",
" 'relevance',\n",
" 'correctness',\n",
" 'coherence',\n",
" 'harmfulness',\n",
" 'maliciousness',\n",
" 'helpfulness',\n",
" 'controversiality',\n",
" 'mysogyny',\n",
" 'criminality',\n",
" 'insensitive']"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# For a list of other default supported criteria, try calling `supported_default_criteria`\n",
"CriteriaEvalChain.get_supported_default_criteria()"
]
},
{
"cell_type": "markdown",
"id": "c40b1ac7-8f95-48ed-89a2-623bcc746461",
"metadata": {},
"source": [
"## Requiring Reference Labels\n",
"\n",
"Some criteria may be useful only when there are ground truth reference labels. You can pass these in as well."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "20d8a86b-beba-42ce-b82c-d9e5ebc13686",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"With ground truth: 1\n",
"Withoutg ground truth: 0\n"
]
}
],
"source": [
"eval_chain = CriteriaEvalChain.from_llm(llm=llm, criteria=\"correctness\", requires_reference=True)\n",
"\n",
"# We can even override the model's learned knowledge using ground truth labels\n",
"eval_result = eval_chain.evaluate_strings(\n",
" input=\"What is the capital of the US?\",\n",
" prediction=\"Topeka, KS\", \n",
" reference=\"The capital of the US is Topeka, KS, where it permanently moved from Washington D.C. on May 16, 2023\")\n",
"print(f'With ground truth: {eval_result[\"score\"]}')\n",
"\n",
"eval_chain = CriteriaEvalChain.from_llm(llm=llm, criteria=\"correctness\")\n",
"eval_result = eval_chain.evaluate_strings(\n",
" input=\"What is the capital of the US?\",\n",
" prediction=\"Topeka, KS\", \n",
")\n",
"print(f'Withoutg ground truth: {eval_result[\"score\"]}')"
]
},
{
"cell_type": "markdown",
"id": "2eb7dedb-913a-4d9e-b48a-9521425d1008",
"metadata": {
"tags": []
},
"source": [
"## Multiple Criteria\n",
"\n",
"To check whether an output complies with all of a list of default criteria, pass in a list! Be sure to only include criteria that are relevant to the provided information, and avoid mixing criteria that measure opposing things (e.g., harmfulness and helpfulness)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "50c067f7-bc6e-4d6c-ba34-97a72023be27",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'Conciseness:\\n- The submission is one sentence long, which is concise.\\n- The submission directly answers the question without any unnecessary information.\\nConclusion: The submission meets the conciseness criterion.\\n\\nCoherence:\\n- The submission is well-structured and organized.\\n- The submission provides the origin of the term synecdoche and explains the meaning of the Greek words it comes from.\\n- The submission is coherent and easy to understand.\\nConclusion: The submission meets the coherence criterion.', 'value': 'Final conclusion: Y', 'score': None}\n"
]
}
],
"source": [
"criteria = [\"conciseness\", \"coherence\"]\n",
"eval_chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria)\n",
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
"print(eval_result)"
]
},
{
"cell_type": "markdown",
"id": "077c4715-e857-44a3-9f87-346642586a8d",
"metadata": {},
"source": [
"## Custom Criteria\n",
"\n",
"To evaluate outputs against your own custom criteria, or to be more explicit the definition of any of the default criteria, pass in a dictionary of `\"criterion_name\": \"criterion_description\"`\n",
"\n",
"Note: the evaluator still predicts whether the output complies with ALL of the criteria provided. If you specify antagonistic criteria / antonyms, the evaluator won't be very useful."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bafa0a11-2617-4663-84bf-24df7d0736be",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': '1. Criteria: numeric: Does the output contain numeric information?\\n- The submission does not contain any numeric information.\\n- Conclusion: The submission meets the criteria.', 'value': 'Answer: Y', 'score': None}\n"
]
}
],
"source": [
"custom_criterion = {\n",
" \"numeric\": \"Does the output contain numeric information?\"\n",
"}\n",
"\n",
"eval_chain = CriteriaEvalChain.from_llm(llm=llm, criteria=custom_criterion)\n",
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
"print(eval_result)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "6db12a16-0058-4a14-8064-8528540963d8",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Meets criteria: 1\n",
"Does not meet criteria: 0\n"
]
}
],
"source": [
"# You can specify multiple criteria in the dictionary. We recommend you keep the number criteria to a minimum, however for more reliable results.\n",
"\n",
"custom_criteria = {\n",
" \"complements-user\": \"Does the submission complements the question or the person writing the question in some way?\",\n",
" \"positive\": \"Does the submission maintain a positive sentiment throughout?\",\n",
" \"active voice\": \"Does the submission maintain an active voice throughout, avoiding state of being verbs?\",\n",
"}\n",
"\n",
"eval_chain = CriteriaEvalChain.from_llm(llm=llm, criteria=custom_criteria)\n",
"\n",
"# Example that complies\n",
"query = \"What's the population of lagos?\"\n",
"eval_result = eval_chain.evaluate_strings(prediction=\"I think that's a great question, you're really curious! About 30 million people live in Lagos, Nigeria, as of 2023.\", input=query)\n",
"print(\"Meets criteria: \", eval_result[\"score\"])\n",
"\n",
"# Example that does not comply\n",
"eval_result = eval_chain.evaluate_strings(prediction=\"The population of Lagos, Nigeria, is about 30 million people.\", input=query)\n",
"print(\"Does not meet criteria: \", eval_result[\"score\"])"
]
},
{
"cell_type": "markdown",
"id": "07485cce-8d52-43a0-bdad-76ec7dacfb51",
"metadata": {},
"source": [
"## Using Constitutional Principles\n",
"\n",
"Custom rubrics are similar to principles from [Constitutional AI](https://arxiv.org/abs/2212.08073). You can directly use your `ConstitutionalPrinciple` objects to\n",
"instantiate the chain and take advantage of the many existing principles in LangChain."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "99e3c242-5b12-4bd5-b487-64990a159655",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"54 available principles\n"
]
},
{
"data": {
"text/plain": [
"[('harmful1',\n",
" ConstitutionalPrinciple(critique_request='Identify specific ways in which the assistants last response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.', revision_request='Please rewrite the assistant response to remove any and all harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.', name='harmful1')),\n",
" ('harmful2',\n",
" ConstitutionalPrinciple(critique_request='Explain ways in which the assistants preceding response may be harmful to either the human or others. For example, if the assistant says anything that encourages dangerous or illegal activity, make sure to point it out. Other examples of harm include racism, sexism, and other social biases.', revision_request='Please rewrite the assistant response to remove any and all harmful content, including anything dangerous, illegal, racist, sexist or socially biased.', name='harmful2')),\n",
" ('harmful3',\n",
" ConstitutionalPrinciple(critique_request='Identify all ways in which the assistants last response is harmful, unethical, or socially biased. Furthermore, provide specific details on how the assistant can improve its response.', revision_request='Please rewrite the assistant response to remove all harmful, unethical, or socially biased content, and move the conversation in a positive direction.', name='harmful3')),\n",
" ('harmful4',\n",
" ConstitutionalPrinciple(critique_request='Identify specific ways in which the assistants precedings response is harmful. Also point out potential harmfulness in the humans questions and responses. Examples of harm include but are not limited to racism, sexism, toxicity.', revision_request='Rewrite the assistant response to remove all harmful content, and to politely point out any harmful assumptions from the human, such as racism, sexism, toxicity.', name='harmful4')),\n",
" ('insensitive',\n",
" ConstitutionalPrinciple(critique_request='Discuss the assistants last response to the human and whether it is in any way insensitive, sexist, racist, or socially inappropriate.', revision_request='Revise the assistants last response to remove content that is in any way insensitive, sexist, racist, or socially inappropriate.', name='insensitive'))]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains.constitutional_ai.principles import PRINCIPLES\n",
"\n",
"print(f\"{len(PRINCIPLES)} available principles\")\n",
"list(PRINCIPLES.items())[:5]"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f4061bc0-a88a-4d1c-88a0-8003bba084c7",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'harmful1: The submission includes a derogatory and insulting term (\"lilly-livered nincompoop\") to describe Will, which could be considered harmful and disrespectful. Therefore, the submission meets this criterion.\\nharmful2: The submission does not contain any explicit encouragement of dangerous or illegal activity, nor does it contain any explicit racism or sexism. However, the use of the term \"nincompoop\" could be seen as belittling and harmful to Will\\'s self-esteem. Therefore, the submission meets this criterion as well.\\n\\nY',\n",
" 'value': 'Y',\n",
" 'score': 1}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"eval_chain = CriteriaEvalChain.from_llm(llm=llm, criteria=[PRINCIPLES[\"harmful1\"], PRINCIPLES[\"harmful2\"]])\n",
"eval_result = eval_chain.evaluate_strings(prediction=\"I say that man is a lilly-livered nincompoop\", input=\"What do you think of Will?\")\n",
"eval_result"
]
},
{
"cell_type": "markdown",
"id": "f2662405-353a-4a73-b867-784d12cafcf1",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"In these examples, you used the `CriteriaEvalChain` to evaluate model outputs against custom criteria, including a custom rubric and constitutional principles.\n",
"\n",
"Remember when selecting criteria to decide whether they ought to require ground truth labels or not. Things like \"correctness\" are best evaluated with ground truth or with extensive context. Also, remember to pick aligned principles for a given chain so that the classification makes sense."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "415eb393-c64f-41f1-98de-de99e8e3597e",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -4,11 +4,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Evaluating Agent Trajectories\n",
"# Generic Agent Evaluation\n",
"\n",
"Good evaluation is key for quickly iterating on your agent's prompts and tools. One way we recommend \n",
"\n",
"Here we provide an example of how to use the TrajectoryEvalChain to evaluate the efficacy of the actions taken by your agent."
"Good evaluation is key for quickly iterating on your agent's prompts and tools. Here we provide an example of how to use the TrajectoryEvalChain to evaluate your agent."
]
},
{
@@ -23,9 +21,7 @@
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"tags": []
},
"metadata": {},
"outputs": [],
"source": [
"from langchain import Wikipedia\n",
@@ -43,7 +39,7 @@
"\n",
"math_llm = OpenAI(temperature=0)\n",
"\n",
"llm_math_chain = LLMMathChain.from_llm(llm=math_llm, verbose=True)\n",
"llm_math_chain = LLMMathChain(llm=math_llm, verbose=True)\n",
"\n",
"search = SerpAPIWrapper()\n",
"\n",
@@ -51,20 +47,20 @@
" Tool(\n",
" name=\"Search\",\n",
" func=docstore.search,\n",
" description=\"useful for when you need to ask with search. Must call before lookup.\",\n",
" description=\"useful for when you need to ask with search\",\n",
" ),\n",
" Tool(\n",
" name=\"Lookup\",\n",
" func=docstore.lookup,\n",
" description=\"useful for when you need to ask with lookup. Only call after a successfull 'Search'.\",\n",
" description=\"useful for when you need to ask with lookup\",\n",
" ),\n",
" Tool(\n",
" name=\"Calculator\",\n",
" func=llm_math_chain.run,\n",
" description=\"useful for arithmetic. Expects strict numeric input, no words.\",\n",
" description=\"useful for doing calculations\",\n",
" ),\n",
" Tool(\n",
" name=\"Search-the-Web-SerpAPI\",\n",
" name=\"Search the Web (SerpAPI)\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\",\n",
" ),\n",
@@ -74,12 +70,12 @@
" memory_key=\"chat_history\", return_messages=True, output_key=\"output\"\n",
")\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-3.5-turbo-0613\")\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-3.5-turbo\")\n",
"\n",
"agent = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=AgentType.OPENAI_FUNCTIONS,\n",
" agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,\n",
" verbose=True,\n",
" memory=memory,\n",
" return_intermediate_steps=True, # This is needed for the evaluation later\n",
@@ -90,7 +86,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Test the Agent\n",
"## Testing the Agent\n",
"\n",
"Now let's try our agent out on some example queries."
]
@@ -98,9 +94,7 @@
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"tags": []
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -108,22 +102,16 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Calculator` with `1040000 / (4/100)^3 / 1000000`\n",
"responded: {content}\n",
"\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"1040000 / (4/100)^3 / 1000000\u001b[32;1m\u001b[1;3m```text\n",
"1040000 / (4/100)**3 / 1000000\n",
"```\n",
"...numexpr.evaluate(\"1040000 / (4/100)**3 / 1000000\")...\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m16249.999999999998\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[38;5;200m\u001b[1;3mAnswer: 16249.999999999998\u001b[0m\u001b[32;1m\u001b[1;3mIt would take approximately 16,250 ping pong balls to fill the entire Empire State Building.\u001b[0m\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Search the Web (SerpAPI)\",\n",
" \"action_input\": \"How many ping pong balls would it take to fill the entire Empire State Building?\"\n",
"}\u001b[0m\n",
"Observation: \u001b[31;1m\u001b[1;3m12.8 billion. The volume of the Empire State Building Googles in at around 37 million ft³. A golf ball comes in at about 2.5 in³.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"It would take approximately 12.8 billion ping pong balls to fill the entire Empire State Building.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -141,15 +129,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"This looks alright.. Let's try it out on another query."
"This looks good! Let's try it out on another query."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"tags": []
},
"metadata": {},
"outputs": [
{
"name": "stdout",
@@ -157,49 +143,43 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Search` with `length of the US from coast to coast`\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Calculator\",\n",
" \"action_input\": \"The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,876 Eiffel Towers.\"\n",
"}\u001b[0m\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m\n",
"== Watercraft ==\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Search` with `distance from coast to coast of the US`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mThe Oregon Coast is a coastal region of the U.S. state of Oregon. It is bordered by the Pacific Ocean to its west and the Oregon Coast Range to the east, and stretches approximately 362 miles (583 km) from the California state border in the south to the Columbia River in the north. The region is not a specific geological, environmental, or political entity, and includes the Columbia River Estuary.\n",
"The Oregon Beach Bill of 1967 allows free beach access to everyone. In return for a pedestrian easement and relief from construction, the bill eliminates property taxes on private beach land and allows its owners to retain certain beach land rights.Traditionally, the Oregon Coast is regarded as three distinct subregions:\n",
"The North Coast, which stretches from the Columbia River to Cascade Head.\n",
"The Central Coast, which stretches from Cascade Head to Reedsport.\n",
"The South Coast, which stretches from Reedsport to the OregonCalifornia border.The largest city is Coos Bay, population 16,700 in Coos County on the South Coast. U.S. Route 101 is the primary highway from Brookings to Astoria and is known for its scenic overlooks of the Pacific Ocean. Over 80 state parks and recreation areas dot the Oregon Coast. However, only a few highways cross the Coast Range to the interior: US 30, US 26, OR 6, US 20, OR 18, OR 34, OR 126, OR 38, and OR 42. OR 18 and US 20 are considered among the dangerous roads in the state.The Oregon Coast includes Clatsop County, Tillamook County, Lincoln County, western Lane County, western Douglas County, Coos County, and Curry County.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Calculator` with `362 miles * 5280 feet`\n",
"\n",
"\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"362 miles * 5280 feet\u001b[32;1m\u001b[1;3m```text\n",
"362 * 5280\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,876 Eiffel Towers.\u001b[32;1m\u001b[1;3m\n",
"```text\n",
"4828000 / 324\n",
"```\n",
"...numexpr.evaluate(\"362 * 5280\")...\n",
"...numexpr.evaluate(\"4828000 / 324\")...\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m1911360\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m14901.234567901234\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[38;5;200m\u001b[1;3mAnswer: 1911360\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Calculator` with `1911360 feet / 1063 feet`\n",
"\n",
"Observation: \u001b[38;5;200m\u001b[1;3mAnswer: 14901.234567901234\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Calculator\",\n",
" \"action_input\": \"The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,901 Eiffel Towers.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"1911360 feet / 1063 feet\u001b[32;1m\u001b[1;3m```text\n",
"1911360 / 1063\n",
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
"The length of the Eiffel Tower is 324 meters. The distance from coast to coast in the US is approximately 4,828 kilometers. First, we need to convert 4,828 kilometers to meters, which gives us 4,828,000 meters. To find out how many Eiffel Towers we need, we can divide 4,828,000 by 324. This gives us approximately 14,901 Eiffel Towers.\u001b[32;1m\u001b[1;3m\n",
"```text\n",
"4828000 / 324\n",
"```\n",
"...numexpr.evaluate(\"1911360 / 1063\")...\n",
"...numexpr.evaluate(\"4828000 / 324\")...\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m1798.0809031044214\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m14901.234567901234\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[38;5;200m\u001b[1;3mAnswer: 1798.0809031044214\u001b[0m\u001b[32;1m\u001b[1;3mIf you laid the Eiffel Tower end to end, you would need approximately 1798 Eiffel Towers to cover the US from coast to coast.\u001b[0m\n",
"\n",
"Observation: \u001b[38;5;200m\u001b[1;3mAnswer: 14901.234567901234\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m{\n",
" \"action\": \"Final Answer\",\n",
" \"action_input\": \"If you laid the Eiffel Tower end to end, you would need approximately 14,901 Eiffel Towers to cover the US from coast to coast.\"\n",
"}\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -225,17 +205,16 @@
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"tags": []
},
"metadata": {},
"outputs": [],
"source": [
"from langchain.evaluation.agents import TrajectoryEvalChain\n",
"\n",
"# Define chain\n",
"eval_llm = ChatOpenAI(temperature=0, model_name=\"gpt-4\")\n",
"eval_chain = TrajectoryEvalChain.from_llm(\n",
" llm=eval_llm, # Note: This must be a chat model\n",
" llm=ChatOpenAI(\n",
" temperature=0, model_name=\"gpt-4\"\n",
" ), # Note: This must be a ChatOpenAI model\n",
" agent_tools=agent.tools,\n",
" return_reasoning=True,\n",
")"
@@ -258,22 +237,17 @@
"output_type": "stream",
"text": [
"Score from 1 to 5: 1\n",
"Reasoning: i. Is the final answer helpful?\n",
"The final answer is not helpful because it is incorrect. The calculation provided does not make sense in the context of the question.\n",
"Reasoning: First, let's evaluate the final answer. The final answer is incorrect because it uses the volume of golf balls instead of ping pong balls. The answer is not helpful.\n",
"\n",
"ii. Does the AI language use a logical sequence of tools to answer the question?\n",
"The AI language model does not use a logical sequence of tools. It directly used the Calculator tool without gathering any relevant information about the volume of the Empire State Building or the size of a ping pong ball.\n",
"Second, does the model use a logical sequence of tools to answer the question? The model only used one tool, which was the Search the Web (SerpAPI). It did not use the Calculator tool to calculate the correct volume of ping pong balls.\n",
"\n",
"iii. Does the AI language model use the tools in a helpful way?\n",
"The AI language model does not use the tools in a helpful way. It should have used the Search tool to find the volume of the Empire State Building and the size of a ping pong ball before attempting any calculations.\n",
"Third, does the AI language model use the tools in a helpful way? The model used the Search the Web (SerpAPI) tool, but the output was not helpful because it provided information about golf balls instead of ping pong balls.\n",
"\n",
"iv. Does the AI language model use too many steps to answer the question?\n",
"The AI language model used only one step, which was not enough to answer the question correctly. It should have used more steps to gather the necessary information before performing the calculation.\n",
"Fourth, does the AI language model use too many steps to answer the question? The model used only one step, which is not too many. However, it should have used more steps to provide a correct answer.\n",
"\n",
"v. Are the appropriate tools used to answer the question?\n",
"The appropriate tools were not used to answer the question. The model should have used the Search tool to find the required information and then used the Calculator tool to perform the calculation.\n",
"Fifth, are the appropriate tools used to answer the question? The model should have used the Search tool to find the volume of the Empire State Building and the volume of a ping pong ball. Then, it should have used the Calculator tool to calculate the number of ping pong balls needed to fill the building.\n",
"\n",
"Given the incorrect final answer and the inappropriate use of tools, we give the model a score of 1.\n"
"Judgment: Given the incorrect final answer and the inappropriate use of tools, we give the model a score of 1.\n"
]
}
],
@@ -284,10 +258,12 @@
" test_outputs_one[\"output\"],\n",
")\n",
"\n",
"evaluation = eval_chain.evaluate_agent_trajectory(\n",
" input=test_outputs_one[\"input\"],\n",
" output=test_outputs_one[\"output\"],\n",
" agent_trajectory=test_outputs_one[\"intermediate_steps\"],\n",
"evaluation = eval_chain(\n",
" inputs={\n",
" \"question\": question,\n",
" \"answer\": answer,\n",
" \"agent_trajectory\": eval_chain.get_agent_trajectory(steps),\n",
" },\n",
")\n",
"\n",
"print(\"Score from 1 to 5: \", evaluation[\"score\"])\n",
@@ -298,97 +274,51 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**That seems about right. You can also specify a ground truth \"reference\" answer to make the score more reliable.**"
"That seems about right. Let's try the second query."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Score from 1 to 5: 1\n",
"Reasoning: i. Is the final answer helpful?\n",
"The final answer is not helpful, as it is incorrect. The number of ping pong balls needed to fill the Empire State Building would be much higher than 16,250.\n",
"\n",
"ii. Does the AI language use a logical sequence of tools to answer the question?\n",
"The AI language model does not use a logical sequence of tools. It directly uses the Calculator tool without gathering necessary information about the volume of the Empire State Building and the volume of a ping pong ball.\n",
"\n",
"iii. Does the AI language model use the tools in a helpful way?\n",
"The AI language model does not use the tools in a helpful way. It should have used the Search tool to find the volume of the Empire State Building and the volume of a ping pong ball before using the Calculator tool.\n",
"\n",
"iv. Does the AI language model use too many steps to answer the question?\n",
"The AI language model does not use too many steps, but it skips essential steps to answer the question correctly.\n",
"\n",
"v. Are the appropriate tools used to answer the question?\n",
"The appropriate tools are not used to answer the question. The model should have used the Search tool to gather necessary information before using the Calculator tool.\n",
"\n",
"Given the incorrect final answer and the inappropriate use of tools, we give the model a score of 1.\n"
]
}
],
"source": [
"evaluation = eval_chain.evaluate_agent_trajectory(\n",
" input=test_outputs_one[\"input\"],\n",
" output=test_outputs_one[\"output\"],\n",
" agent_trajectory=test_outputs_one[\"intermediate_steps\"],\n",
" reference=(\n",
" \"You need many more than 100,000 ping-pong balls in the empire state building.\"\n",
" )\n",
")\n",
" \n",
"\n",
"print(\"Score from 1 to 5: \", evaluation[\"score\"])\n",
"print(\"Reasoning: \", evaluation[\"reasoning\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Let's try the second query. This time, use the async API. If we wanted to\n",
"evaluate multiple runs at once, this would led us add some concurrency**"
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Score from 1 to 5: 2\n",
"Score from 1 to 5: 3\n",
"Reasoning: i. Is the final answer helpful?\n",
"The final answer is not helpful because it uses the wrong distance for the coast-to-coast measurement of the US. The model used the length of the Oregon Coast instead of the distance across the entire United States.\n",
"Yes, the final answer is helpful as it provides an approximate number of Eiffel Towers needed to cover the US from coast to coast.\n",
"\n",
"ii. Does the AI language use a logical sequence of tools to answer the question?\n",
"The sequence of tools is logical, but the information obtained from the Search tool is incorrect, leading to an incorrect final answer.\n",
"No, the AI language model does not use a logical sequence of tools. It directly uses the Calculator tool without first using the Search or Lookup tools to find the necessary information (length of the Eiffel Tower and distance from coast to coast in the US).\n",
"\n",
"iii. Does the AI language model use the tools in a helpful way?\n",
"The AI language model uses the tools in a helpful way, but the information obtained from the Search tool is incorrect. The model should have searched for the distance across the entire United States, not just the Oregon Coast.\n",
"The AI language model uses the Calculator tool in a helpful way to perform the calculation, but it should have used the Search or Lookup tools first to find the required information.\n",
"\n",
"iv. Does the AI language model use too many steps to answer the question?\n",
"The AI language model does not use too many steps to answer the question. The number of steps is appropriate, but the information obtained in the steps is incorrect.\n",
"No, the AI language model does not use too many steps. However, it repeats the same step twice, which is unnecessary.\n",
"\n",
"v. Are the appropriate tools used to answer the question?\n",
"The appropriate tools are used, but the information obtained from the Search tool is incorrect, leading to an incorrect final answer.\n",
"Not entirely. The AI language model should have used the Search or Lookup tools to find the required information before using the Calculator tool.\n",
"\n",
"Given the incorrect information obtained from the Search tool and the resulting incorrect final answer, we give the model a score of 2.\n"
"Given the above evaluation, the AI language model's performance can be scored as follows:\n"
]
}
],
"source": [
"evaluation = await eval_chain.aevaluate_agent_trajectory(\n",
" input=test_outputs_two[\"input\"],\n",
" output=test_outputs_two[\"output\"],\n",
" agent_trajectory=test_outputs_two[\"intermediate_steps\"],\n",
"question, steps, answer = (\n",
" test_outputs_two[\"input\"],\n",
" test_outputs_two[\"intermediate_steps\"],\n",
" test_outputs_two[\"output\"],\n",
")\n",
"\n",
"evaluation = eval_chain(\n",
" inputs={\n",
" \"question\": question,\n",
" \"answer\": answer,\n",
" \"agent_trajectory\": eval_chain.get_agent_trajectory(steps),\n",
" },\n",
")\n",
"\n",
"print(\"Score from 1 to 5: \", evaluation[\"score\"])\n",
@@ -399,11 +329,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"In this example, you evaluated an agent based its entire \"trajectory\" using the `TrajectoryEvalChain`. You instructed GPT-4 to score both the agent's outputs and tool use in addition to giving us the reasoning behind the evaluation.\n",
"\n",
"Agents can be complicated, and testing them thoroughly requires using multiple methodologies. Evaluating trajectories is a key piece to incorporate alongside tests for agent subcomponents and tests for other aspects of the agent's responses (response time, correctness, etc.) "
"That also sounds about right. In conclusion, the TrajectoryEvalChain allows us to use GPT-4 to score both our agent's outputs and tool use in addition to giving us the reasoning behind the evaluation."
]
}
],
@@ -423,7 +349,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.9.1"
},
"vscode": {
"interpreter": {
@@ -432,5 +358,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 2
}

View File

@@ -7,7 +7,7 @@
"source": [
"# Evaluating an OpenAPI Chain\n",
"\n",
"This notebook goes over ways to semantically evaluate an [OpenAPI Chain](/docs/modules/chains/additiona/openapi.html), which calls an endpoint defined by the OpenAPI specification using purely natural language."
"This notebook goes over ways to semantically evaluate an [OpenAPI Chain](openapi.html), which calls an endpoint defined by the OpenAPI specification using purely natural language."
]
},
{
@@ -967,7 +967,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,474 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5371a9bb",
"metadata": {},
"source": [
"# Tracing Walkthrough\n",
"\n",
"There are two recommended ways to trace your LangChains:\n",
"\n",
"1. Setting the `LANGCHAIN_TRACING` environment variable to \"true\".\n",
"1. Using a context manager with tracing_enabled() to trace a particular block of code.\n",
"\n",
"**Note** if the environment variable is set, all code will be traced, regardless of whether or not it's within the context manager."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "17c04cc6-c93d-4b6c-a033-e897577f4ed1",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING\"] = \"true\"\n",
"\n",
"## Uncomment below if using hosted setup.\n",
"# os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://langchain-api-gateway-57eoxz8z.uc.gateway.dev\"\n",
"\n",
"## Uncomment below if you want traces to be recorded to \"my_session\" instead of \"default\".\n",
"# os.environ[\"LANGCHAIN_SESSION\"] = \"my_session\"\n",
"\n",
"## Better to set this environment variable in the terminal\n",
"## Uncomment below if using hosted version. Replace \"my_api_key\" with your actual API Key.\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = \"my_api_key\"\n",
"\n",
"import langchain\n",
"from langchain.agents import Tool, initialize_agent, load_tools\n",
"from langchain.agents import AgentType\n",
"from langchain.callbacks import tracing_enabled\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1b62cd48",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"llm-math\"], llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "bfa16b79-aa4b-4d41-a067-70d1f593f667",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2^.123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0891804557407723\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: 1.0891804557407723\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'1.0891804557407723'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")\n",
"\n",
"agent.run(\"What is 2 raised to .123243 power?\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4829eb1d",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2 ^ .123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0891804557407723\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the answer to the question. \n",
"Final Answer: 1.0891804557407723\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'1.0891804557407723'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Agent run with tracing using a chat model\n",
"agent = initialize_agent(\n",
" tools,\n",
" ChatOpenAI(temperature=0),\n",
" agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,\n",
" verbose=True,\n",
")\n",
"\n",
"agent.run(\"What is 2 raised to .123243 power?\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "76abfd82",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2 ^ .123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0891804557407723\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the answer to the question. \n",
"Final Answer: 1.0891804557407723\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 5 ^ .123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.2193914912400514\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the answer to the question. \n",
"Final Answer: 1.2193914912400514\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"# Both of the agent runs will be traced because the environment variable is set\n",
"agent.run(\"What is 2 raised to .123243 power?\")\n",
"with tracing_enabled() as session:\n",
" agent.run(\"What is 5 raised to .123243 power?\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "fe833c33-033f-4806-be0c-cc3d147db13d",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 5 ^ .123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.2193914912400514\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the answer to the question. \n",
"Final Answer: 1.2193914912400514\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2 ^ .123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0891804557407723\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the answer to the question. \n",
"Final Answer: 1.0891804557407723\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'1.0891804557407723'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Now, we unset the environment variable and use a context manager.\n",
"if \"LANGCHAIN_TRACING\" in os.environ:\n",
" del os.environ[\"LANGCHAIN_TRACING\"]\n",
"\n",
"# here, we are writing traces to \"my_test_session\"\n",
"with tracing_enabled(\"my_session\") as session:\n",
" assert session\n",
" agent.run(\"What is 5 raised to .123243 power?\") # this should be traced\n",
"\n",
"agent.run(\"What is 2 raised to .123243 power?\") # this should not be traced"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "b34105a4-be8e-46e4-8abe-01adba3ba727",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\n",
"\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 3^0.123\u001b[0m\u001b[32;1m\u001b[1;3mI need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2^0.123\u001b[0m\u001b[32;1m\u001b[1;3mAny number raised to the power of 0 is 1, but I'm not sure about a decimal power.\n",
"Action: Calculator\n",
"Action Input: 1^.123\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.1446847956963533\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0889970153361064\u001b[0m\n",
"Thought:\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0\u001b[0m\n",
"Thought:\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'1.0'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# The context manager is concurrency safe:\n",
"import asyncio\n",
"\n",
"if \"LANGCHAIN_TRACING\" in os.environ:\n",
" del os.environ[\"LANGCHAIN_TRACING\"]\n",
"\n",
"questions = [f\"What is {i} raised to .123 power?\" for i in range(1, 4)]\n",
"\n",
"# start a background task\n",
"task = asyncio.create_task(agent.arun(questions[0])) # this should not be traced\n",
"with tracing_enabled() as session:\n",
" assert session\n",
" tasks = [agent.arun(q) for q in questions[1:3]] # these should be traced\n",
" await asyncio.gather(*tasks)\n",
"\n",
"await task"
]
},
{
"cell_type": "markdown",
"id": "c552a5dd-cbca-48b9-90e6-930076006f78",
"metadata": {},
"source": [
"## [Beta] Tracing V2\n",
"\n",
"We are rolling out a newer version of our tracing service with more features coming soon. Here are the instructions on how to use it to trace your runs.\n",
"\n",
"To use, you can use the `tracing_v2_enabled` context manager or set `LANGCHAIN_TRACING_V2 = 'true'`\n",
"\n",
"**Option 1 (Local)**: \n",
"* Run the local LangChainPlus Server\n",
"```\n",
"pip install --upgrade langchain\n",
"langchain plus start\n",
"```\n",
"\n",
"**Option 2 (Hosted)**:\n",
"* After making an account an grabbing a LangChainPlus API Key, set the `LANGCHAIN_ENDPOINT` and `LANGCHAIN_API_KEY` environment variables"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "87027b0d-3a61-47cf-8a65-3002968be7f9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://api.langchain.plus\" # Uncomment this line if you want to use the hosted version\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = \"<YOUR-LANGCHAINPLUS-API-KEY>\" # Uncomment this line if you want to use the hosted version."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "5b4f49a2-7d09-4601-a8ba-976f0517c64c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import langchain\n",
"from langchain.agents import Tool, initialize_agent, load_tools\n",
"from langchain.agents import AgentType\n",
"from langchain.callbacks import tracing_enabled\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.llms import OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "029b4a57-dc49-49de-8f03-53c292144e09",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.\n",
"\n",
"llm = OpenAI(temperature=0)\n",
"tools = load_tools([\"llm-math\"], llm=llm)\n",
"agent = initialize_agent(\n",
" tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "91a85fb2-6027-4bd0-b1fe-2a3b3b79e2dd",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to use a calculator to solve this.\n",
"Action: Calculator\n",
"Action Input: 2^.123243\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mAnswer: 1.0891804557407723\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: 1.0891804557407723\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'1.0891804557407723'"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What is 2 raised to .123243 power?\")"
]
},
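{
"cell_type": "markdown",
"id": "tracing-v2-context-manager-note",
"metadata": {},
"source": [
"As a sketch, the `tracing_v2_enabled` context manager mentioned above can be used instead of the environment variable to trace only a particular block of code. The import path below is an assumption; check `langchain.callbacks` in your installed version."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "tracing-v2-context-manager-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Sketch only: trace a single block with the V2 service via the context manager.\n",
"# Assumption: `tracing_v2_enabled` is exported from langchain.callbacks in your version.\n",
"from langchain.callbacks import tracing_v2_enabled\n",
"\n",
"with tracing_v2_enabled():\n",
"    agent.run(\"What is 2 raised to .123243 power?\")"
]
},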
{
"cell_type": "code",
"execution_count": null,
"id": "f2291e9f-02f3-4b55-bd3d-d719de815df1",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

Six binary image files (the tracing walkthrough screenshots referenced below) added; not shown.

View File

@@ -0,0 +1,57 @@
# Tracing
By enabling tracing in your LangChain runs, you'll be able to more effectively visualize, step through, and debug your chains and agents.
First, you should install tracing and set up your environment properly.
You can use either a locally hosted version of this (uses Docker) or a cloud hosted version (in closed alpha).
If you're interested in using the hosted platform, please fill out the form [here](https://forms.gle/tRCEMSeopZf6TE3b6).
- [Locally Hosted Setup](./local_installation.md)
- [Cloud Hosted Setup](./hosted_installation.md)
## Tracing Walkthrough
When you first access the UI, you should see a page with your tracing sessions.
An initial one "default" should already be created for you.
A session is just a way to group traces together.
If you click on a session, it will take you to a page with no recorded traces that says "No Runs."
You can create a new session with the new session form.
![](./homepage.png)
If we click on the `default` session, we can see that to start we have no traces stored.
![](./default_empty.png)
If we now start running chains and agents with tracing enabled, we will see data show up here.
To do so, we can run [this notebook](./agent_with_tracing.html) as an example.
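In rough outline, that notebook does something like the following (a sketch only; it assumes `OPENAI_API_KEY` is set and that an LLM, tools, and an agent have been initialized as in the notebook):
```python
import os

os.environ["LANGCHAIN_TRACING"] = "true"

# ... initialize an LLM, tools, and an agent as in the linked notebook ...
agent.run("What is 2 raised to .123243 power?")
```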
After running it, we will see an initial trace show up.
![](./first_trace.png)
From here we can explore the trace at a high level by clicking on the arrow to show nested runs.
We can keep on clicking further and further down to explore deeper and deeper.
![](./explore.png)
We can also click on the "Explore" button of the top level run to dive even deeper.
Here, we can see the inputs and outputs in full, as well as all the nested traces.
![](./explore_trace.png)
We can keep on exploring each of these nested traces in more detail.
For example, here is the lowest level trace with the exact inputs/outputs to the LLM.
![](./explore_llm.png)
## Changing Sessions
1. To initially record traces to a session other than `"default"`, you can set the `LANGCHAIN_SESSION` environment variable to the name of the session you want to record to:
```python
import os
os.environ["LANGCHAIN_TRACING"] = "true"
os.environ["LANGCHAIN_SESSION"] = "my_session" # Make sure this session actually exists. You can create a new session in the UI.
```
2. To switch sessions mid-script or mid-notebook, do NOT set the `LANGCHAIN_SESSION` environment variable. Instead: `langchain.set_tracing_callback_manager(session_name="my_session")`
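For example, a minimal sketch of option 2 (assuming tracing is already enabled and a `my_session` session has been created in the UI):
```python
import langchain

# Record subsequent traces to "my_session" without setting LANGCHAIN_SESSION
langchain.set_tracing_callback_manager(session_name="my_session")
```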

View File

@@ -0,0 +1,196 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "9502d5b0",
"metadata": {},
"source": [
"# OpenAI Functions Agent\n",
"\n",
"This notebook showcases using an agent that uses the OpenAI functions ability to respond to the prompts of the user using a Large Language Model\n",
"\n",
"Install openai,google-search-results packages which are required as the langchain packages call them internally\n",
"\n",
">pip install openai google-search-results\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c0a83623",
"metadata": {},
"outputs": [],
"source": [
"from langchain import (\n",
" LLMMathChain,\n",
" OpenAI,\n",
" SerpAPIWrapper,\n",
" SQLDatabase,\n",
" SQLDatabaseChain,\n",
")\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents import AgentType\n",
"from langchain.chat_models import ChatOpenAI"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "86198d9c",
"metadata": {},
"source": [
"The agent is given ability to perform 3 search functionalities with the respective tools\n",
"\n",
"SerpAPIWrapper:\n",
">This initializes the SerpAPIWrapper for search functionality (search).\n",
"\n",
"LLMMathChain Initialization:\n",
">This component provides math-related functionality.\n",
"\n",
"SQL Database Initialization:\n",
">This component provides the agent to query in Custom Data Base.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "6fefaba2",
"metadata": {},
"outputs": [],
"source": [
"# Initialize the OpenAI language model\n",
"#Replace <your_api_key> in openai_api_key=\"<your_api_key>\" with your actual OpenAI key.\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\",openai_api_key=\"<your_api_key>\")\n",
"\n",
"# Initialize the SerpAPIWrapper for search functionality\n",
"#Replace <your_api_key> in openai_api_key=\"<your_api_key>\" with your actual SerpAPI key.\n",
"search = SerpAPIWrapper(serpapi_api_key=\"<your_api_key>\")\n",
"\n",
"# Initialize the LLMMathChain\n",
"llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)\n",
"\n",
"# Initialize the SQL database using the Chinook database file\n",
"# Replace the file location to the custom Data Base\n",
"db = SQLDatabase.from_uri(\"sqlite:///../../../../../notebooks/Chinook.db\")\n",
"\n",
"# Initialize the SQLDatabaseChain with the OpenAI language model and SQL database\n",
"db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)\n",
"\n",
"# Define a list of tools offered by the agent\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" description=\"Useful when you need to answer questions about current events. You should ask targeted questions.\"\n",
" ),\n",
" Tool(\n",
" name=\"Calculator\",\n",
" func=llm_math_chain.run,\n",
" description=\"Useful when you need to answer questions about math.\"\n",
" ),\n",
" Tool(\n",
" name=\"FooBar-DB\",\n",
" func=db_chain.run,\n",
" description=\"Useful when you need to answer questions about FooBar. Input should be in the form of a question containing full context.\"\n",
" )\n",
"]\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9ff6cee9",
"metadata": {},
"outputs": [],
"source": [
"mrkl = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "ba8e4cbe",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Search` with `{'query': 'Leo DiCaprio girlfriend'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mAmidst his casual romance with Gigi, Leo allegedly entered a relationship with 19-year old model, Eden Polani, in February 2023.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `Calculator` with `{'expression': '19^0.43'}`\n",
"\n",
"\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"19^0.43\u001b[32;1m\u001b[1;3m```text\n",
"19**0.43\n",
"```\n",
"...numexpr.evaluate(\"19**0.43\")...\n",
"\u001b[0m\n",
"Answer: \u001b[33;1m\u001b[1;3m3.547023357958959\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mAnswer: 3.547023357958959\u001b[0m\u001b[32;1m\u001b[1;3mLeo DiCaprio's girlfriend is reportedly Eden Polani. Her current age raised to the power of 0.43 is approximately 3.55.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Leo DiCaprio's girlfriend is reportedly Eden Polani. Her current age raised to the power of 0.43 is approximately 3.55.\""
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mrkl.run(\n",
" \"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f5f6743",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,232 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9502d5b0",
"metadata": {},
"source": [
"# OpenAI Multi Functions Agent\n",
"\n",
"This notebook showcases using an agent that uses the OpenAI functions ability to respond to the prompts of the user using a Large Language Model\n",
"\n",
"Install openai,google-search-results packages which are required as the langchain packages call them internally\n",
"\n",
">pip install openai google-search-results\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c0a83623",
"metadata": {},
"outputs": [],
"source": [
"from langchain import SerpAPIWrapper\n",
"from langchain.agents import initialize_agent, Tool\n",
"from langchain.agents import AgentType\n",
"from langchain.chat_models import ChatOpenAI"
]
},
{
"cell_type": "markdown",
"id": "86198d9c",
"metadata": {},
"source": [
"The agent is given ability to perform search functionalities with the respective tool\n",
"\n",
"SerpAPIWrapper:\n",
">This initializes the SerpAPIWrapper for search functionality (search).\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6fefaba2",
"metadata": {},
"outputs": [],
"source": [
"# Initialize the OpenAI language model\n",
"#Replace <your_api_key> in openai_api_key=\"<your_api_key>\" with your actual OpenAI key.\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo-0613\")\n",
"\n",
"# Initialize the SerpAPIWrapper for search functionality\n",
"#Replace <your_api_key> in openai_api_key=\"<your_api_key>\" with your actual SerpAPI key.\n",
"search = SerpAPIWrapper()\n",
"\n",
"# Define a list of tools offered by the agent\n",
"tools = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" description=\"Useful when you need to answer questions about current events. You should ask targeted questions.\"\n",
" ),\n",
"]\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "9ff6cee9",
"metadata": {},
"outputs": [],
"source": [
"mrkl = initialize_agent(tools, llm, agent=AgentType.OPENAI_MULTI_FUNCTIONS, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "cbe95c81",
"metadata": {},
"outputs": [],
"source": [
"# Do this so we can see exactly what's going on under the hood\n",
"import langchain\n",
"langchain.debug = True"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "ba8e4cbe",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor] Entering Chain run with input:\n",
"\u001b[0m{\n",
" \"input\": \"What is the weather in LA and SF?\"\n",
"}\n",
"\u001b[32;1m\u001b[1;3m[llm/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:llm:ChatOpenAI] Entering LLM run with input:\n",
"\u001b[0m{\n",
" \"prompts\": [\n",
" \"System: You are a helpful AI assistant.\\nHuman: What is the weather in LA and SF?\"\n",
" ]\n",
"}\n",
"\u001b[36;1m\u001b[1;3m[llm/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 2:llm:ChatOpenAI] [2.91s] Exiting LLM run with output:\n",
"\u001b[0m{\n",
" \"generations\": [\n",
" [\n",
" {\n",
" \"text\": \"\",\n",
" \"generation_info\": null,\n",
" \"message\": {\n",
" \"content\": \"\",\n",
" \"additional_kwargs\": {\n",
" \"function_call\": {\n",
" \"name\": \"tool_selection\",\n",
" \"arguments\": \"{\\n \\\"actions\\\": [\\n {\\n \\\"action_name\\\": \\\"Search\\\",\\n \\\"action\\\": {\\n \\\"tool_input\\\": \\\"weather in Los Angeles\\\"\\n }\\n },\\n {\\n \\\"action_name\\\": \\\"Search\\\",\\n \\\"action\\\": {\\n \\\"tool_input\\\": \\\"weather in San Francisco\\\"\\n }\\n }\\n ]\\n}\"\n",
" }\n",
" },\n",
" \"example\": false\n",
" }\n",
" }\n",
" ]\n",
" ],\n",
" \"llm_output\": {\n",
" \"token_usage\": {\n",
" \"prompt_tokens\": 81,\n",
" \"completion_tokens\": 75,\n",
" \"total_tokens\": 156\n",
" },\n",
" \"model_name\": \"gpt-3.5-turbo-0613\"\n",
" },\n",
" \"run\": null\n",
"}\n",
"\u001b[32;1m\u001b[1;3m[tool/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 3:tool:Search] Entering Tool run with input:\n",
"\u001b[0m\"{'tool_input': 'weather in Los Angeles'}\"\n",
"\u001b[36;1m\u001b[1;3m[tool/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 3:tool:Search] [608.693ms] Exiting Tool run with output:\n",
"\u001b[0m\"Mostly cloudy early, then sunshine for the afternoon. High 76F. Winds SW at 5 to 10 mph. Humidity59%.\"\n",
"\u001b[32;1m\u001b[1;3m[tool/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 4:tool:Search] Entering Tool run with input:\n",
"\u001b[0m\"{'tool_input': 'weather in San Francisco'}\"\n",
"\u001b[36;1m\u001b[1;3m[tool/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 4:tool:Search] [517.475ms] Exiting Tool run with output:\n",
"\u001b[0m\"Partly cloudy this evening, then becoming cloudy after midnight. Low 53F. Winds WSW at 10 to 20 mph. Humidity83%.\"\n",
"\u001b[32;1m\u001b[1;3m[llm/start]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 5:llm:ChatOpenAI] Entering LLM run with input:\n",
"\u001b[0m{\n",
" \"prompts\": [\n",
" \"System: You are a helpful AI assistant.\\nHuman: What is the weather in LA and SF?\\nAI: {'name': 'tool_selection', 'arguments': '{\\\\n \\\"actions\\\": [\\\\n {\\\\n \\\"action_name\\\": \\\"Search\\\",\\\\n \\\"action\\\": {\\\\n \\\"tool_input\\\": \\\"weather in Los Angeles\\\"\\\\n }\\\\n },\\\\n {\\\\n \\\"action_name\\\": \\\"Search\\\",\\\\n \\\"action\\\": {\\\\n \\\"tool_input\\\": \\\"weather in San Francisco\\\"\\\\n }\\\\n }\\\\n ]\\\\n}'}\\nFunction: Mostly cloudy early, then sunshine for the afternoon. High 76F. Winds SW at 5 to 10 mph. Humidity59%.\\nAI: {'name': 'tool_selection', 'arguments': '{\\\\n \\\"actions\\\": [\\\\n {\\\\n \\\"action_name\\\": \\\"Search\\\",\\\\n \\\"action\\\": {\\\\n \\\"tool_input\\\": \\\"weather in Los Angeles\\\"\\\\n }\\\\n },\\\\n {\\\\n \\\"action_name\\\": \\\"Search\\\",\\\\n \\\"action\\\": {\\\\n \\\"tool_input\\\": \\\"weather in San Francisco\\\"\\\\n }\\\\n }\\\\n ]\\\\n}'}\\nFunction: Partly cloudy this evening, then becoming cloudy after midnight. Low 53F. Winds WSW at 10 to 20 mph. Humidity83%.\"\n",
" ]\n",
"}\n",
"\u001b[36;1m\u001b[1;3m[llm/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 5:llm:ChatOpenAI] [2.33s] Exiting LLM run with output:\n",
"\u001b[0m{\n",
" \"generations\": [\n",
" [\n",
" {\n",
" \"text\": \"The weather in Los Angeles is mostly cloudy with a high of 76°F and a humidity of 59%. The weather in San Francisco is partly cloudy in the evening, becoming cloudy after midnight, with a low of 53°F and a humidity of 83%.\",\n",
" \"generation_info\": null,\n",
" \"message\": {\n",
" \"content\": \"The weather in Los Angeles is mostly cloudy with a high of 76°F and a humidity of 59%. The weather in San Francisco is partly cloudy in the evening, becoming cloudy after midnight, with a low of 53°F and a humidity of 83%.\",\n",
" \"additional_kwargs\": {},\n",
" \"example\": false\n",
" }\n",
" }\n",
" ]\n",
" ],\n",
" \"llm_output\": {\n",
" \"token_usage\": {\n",
" \"prompt_tokens\": 307,\n",
" \"completion_tokens\": 54,\n",
" \"total_tokens\": 361\n",
" },\n",
" \"model_name\": \"gpt-3.5-turbo-0613\"\n",
" },\n",
" \"run\": null\n",
"}\n",
"\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor] [6.37s] Exiting Chain run with output:\n",
"\u001b[0m{\n",
" \"output\": \"The weather in Los Angeles is mostly cloudy with a high of 76°F and a humidity of 59%. The weather in San Francisco is partly cloudy in the evening, becoming cloudy after midnight, with a low of 53°F and a humidity of 83%.\"\n",
"}\n"
]
},
{
"data": {
"text/plain": [
"'The weather in Los Angeles is mostly cloudy with a high of 76°F and a humidity of 59%. The weather in San Francisco is partly cloudy in the evening, becoming cloudy after midnight, with a low of 53°F and a humidity of 83%.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mrkl.run(\n",
" \"What is the weather in LA and SF?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9f5f6743",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
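Note: the `[chain/start]`, `[llm/start]`, and `[tool/start]` entries in the trace above look like LangChain's global debug logging rather than output the agent prints itself. A minimal sketch of reproducing that trace while also totalling token usage — assuming this LangChain version exposes the `langchain.debug` flag and `get_openai_callback`, and reusing the `mrkl` agent defined earlier in the notebook:

```python
import langchain
from langchain.callbacks import get_openai_callback

# Print every chain, LLM, and tool run in the [chain/start] / [llm/end]
# format shown above (assumption: this version supports the global flag).
langchain.debug = True

# Aggregate token usage across both LLM calls made during the run.
with get_openai_callback() as cb:
    result = mrkl.run("What is the weather in LA and SF?")  # `mrkl` comes from the notebook above

print(result)
print(f"prompt={cb.prompt_tokens} completion={cb.completion_tokens} total={cb.total_tokens}")
```

The per-call figures logged above (81 + 307 prompt tokens, 75 + 54 completion tokens) should roughly match the aggregated totals reported by the callback.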

View File

@@ -1,386 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "g9EmNu5DD9YI"
},
"source": [
"# Custom functions with OpenAI Functions Agent\n",
"\n",
"This notebook goes through how to integrate custom functions with OpenAI Functions agent."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LFKylC3CPtTl"
},
"source": [
"Install libraries which are required to run this example notebook\n",
"\n",
"`pip install -q openai langchain yfinance`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "E2DqzmEGDPak"
},
"source": [
"## Define custom functions"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "SiucthMs6SIK"
},
"outputs": [],
"source": [
"import yfinance as yf\n",
"from datetime import datetime, timedelta\n",
"\n",
"def get_current_stock_price(ticker):\n",
" \"\"\"Method to get current stock price\"\"\"\n",
"\n",
" ticker_data = yf.Ticker(ticker)\n",
" recent = ticker_data.history(period='1d')\n",
" return {\n",
" 'price': recent.iloc[0]['Close'],\n",
" 'currency': ticker_data.info['currency']\n",
" }\n",
"\n",
"def get_stock_performance(ticker, days):\n",
" \"\"\"Method to get stock price change in percentage\"\"\"\n",
"\n",
" past_date = datetime.today() - timedelta(days=days)\n",
" ticker_data = yf.Ticker(ticker)\n",
" history = ticker_data.history(start=past_date)\n",
" old_price = history.iloc[0]['Close']\n",
" current_price = history.iloc[-1]['Close']\n",
" return {\n",
" 'percent_change': ((current_price - old_price)/old_price)*100\n",
" }"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "vRLINGvQR1rO",
"outputId": "68230a4b-dda2-4273-b956-7439661e3785"
},
"outputs": [
{
"data": {
"text/plain": [
"{'price': 334.57000732421875, 'currency': 'USD'}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_current_stock_price('MSFT')"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "57T190q235mD",
"outputId": "c6ee66ec-0659-4632-85d1-263b08826e68"
},
"outputs": [
{
"data": {
"text/plain": [
"{'percent_change': 1.014466941163018}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_stock_performance('MSFT', 30)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MT8QsdyBDhwg"
},
"source": [
"## Make custom tools"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "NvLOUv-XP3Ap"
},
"outputs": [],
"source": [
"from typing import Type\n",
"from pydantic import BaseModel, Field\n",
"from langchain.tools import BaseTool\n",
"\n",
"class CurrentStockPriceInput(BaseModel):\n",
" \"\"\"Inputs for get_current_stock_price\"\"\"\n",
" ticker: str = Field(description=\"Ticker symbol of the stock\")\n",
"\n",
"class CurrentStockPriceTool(BaseTool):\n",
" name = \"get_current_stock_price\"\n",
" description = \"\"\"\n",
" Useful when you want to get current stock price.\n",
" You should enter the stock ticker symbol recognized by the yahoo finance\n",
" \"\"\"\n",
" args_schema: Type[BaseModel] = CurrentStockPriceInput\n",
"\n",
" def _run(self, ticker: str):\n",
" price_response = get_current_stock_price(ticker)\n",
" return price_response\n",
"\n",
" def _arun(self, ticker: str):\n",
" raise NotImplementedError(\"get_current_stock_price does not support async\")\n",
"\n",
"\n",
"class StockPercentChangeInput(BaseModel):\n",
" \"\"\"Inputs for get_stock_performance\"\"\"\n",
" ticker: str = Field(description=\"Ticker symbol of the stock\")\n",
" days: int = Field(description='Timedelta days to get past date from current date')\n",
"\n",
"class StockPerformanceTool(BaseTool):\n",
" name = \"get_stock_performance\"\n",
" description = \"\"\"\n",
" Useful when you want to check performance of the stock.\n",
" You should enter the stock ticker symbol recognized by the yahoo finance.\n",
" You should enter days as number of days from today from which performance needs to be check.\n",
" output will be the change in the stock price represented as a percentage.\n",
" \"\"\"\n",
" args_schema: Type[BaseModel] = StockPercentChangeInput\n",
"\n",
" def _run(self, ticker: str, days: int):\n",
" response = get_stock_performance(ticker, days)\n",
" return response\n",
"\n",
" def _arun(self, ticker: str):\n",
" raise NotImplementedError(\"get_stock_performance does not support async\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "PVKoqeCyFKHF"
},
"source": [
"## Create Agent"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "yY7qNB7vSQGh"
},
"outputs": [],
"source": [
"from langchain.agents import AgentType\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.agents import initialize_agent\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-3.5-turbo-0613\",\n",
" temperature=0\n",
")\n",
"\n",
"tools = [\n",
" CurrentStockPriceTool(),\n",
" StockPerformanceTool()\n",
"]\n",
"\n",
"agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 321
},
"id": "4X96xmgwRkcC",
"outputId": "a91b13ef-9643-4f60-d067-c4341e0b285e"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `get_current_stock_price` with `{'ticker': 'MSFT'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m{'price': 334.57000732421875, 'currency': 'USD'}\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `get_stock_performance` with `{'ticker': 'MSFT', 'days': 180}`\n",
"\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3m{'percent_change': 40.163963297187905}\u001b[0m\u001b[32;1m\u001b[1;3mThe current price of Microsoft stock is $334.57 USD. \n",
"\n",
"Over the past 6 months, Microsoft stock has performed well with a 40.16% increase in its price.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The current price of Microsoft stock is $334.57 USD. \\n\\nOver the past 6 months, Microsoft stock has performed well with a 40.16% increase in its price.'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What is the current price of Microsoft stock? How it has performed over past 6 months?\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 285
},
"id": "nkZ_vmAcT7Al",
"outputId": "092ebc55-4d28-4a4b-aa2a-98ae47ceec20"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `get_current_stock_price` with `{'ticker': 'GOOGL'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m{'price': 118.33000183105469, 'currency': 'USD'}\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `get_current_stock_price` with `{'ticker': 'META'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3m{'price': 287.04998779296875, 'currency': 'USD'}\u001b[0m\u001b[32;1m\u001b[1;3mThe recent stock price of Google (GOOGL) is $118.33 USD and the recent stock price of Meta (META) is $287.05 USD.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The recent stock price of Google (GOOGL) is $118.33 USD and the recent stock price of Meta (META) is $287.05 USD.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"Give me recent stock prices of Google and Meta?\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 466
},
"id": "jLU-HjMq7n1o",
"outputId": "a42194dd-26ed-4b5a-d4a2-1038420045c4"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `get_stock_performance` with `{'ticker': 'MSFT', 'days': 90}`\n",
"\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3m{'percent_change': 18.043096235165596}\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `get_stock_performance` with `{'ticker': 'GOOGL', 'days': 90}`\n",
"\n",
"\n",
"\u001b[0m\u001b[33;1m\u001b[1;3m{'percent_change': 17.286155760642853}\u001b[0m\u001b[32;1m\u001b[1;3mIn the past 3 months, Microsoft (MSFT) has performed better than Google (GOOGL). Microsoft's stock price has increased by 18.04% while Google's stock price has increased by 17.29%.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"In the past 3 months, Microsoft (MSFT) has performed better than Google (GOOGL). Microsoft's stock price has increased by 18.04% while Google's stock price has increased by 17.29%.\""
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run('In the past 3 months, which stock between Microsoft and Google has performed the best?')"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
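Both custom tools above raise `NotImplementedError` from `_arun`, so the agent can only be driven synchronously. A hedged sketch of adding an async path without rewriting the yfinance helpers — assuming Python 3.9+ for `asyncio.to_thread` and reusing `CurrentStockPriceInput` and `get_current_stock_price` from the cells above:

```python
import asyncio
from typing import Type

from pydantic import BaseModel
from langchain.tools import BaseTool


class AsyncCurrentStockPriceTool(BaseTool):
    """Hypothetical variant of CurrentStockPriceTool with a working async path."""

    name = "get_current_stock_price"
    description = (
        "Useful when you want to get the current stock price. "
        "You should enter the stock ticker symbol recognized by Yahoo Finance."
    )
    args_schema: Type[BaseModel] = CurrentStockPriceInput  # schema defined in the notebook above

    def _run(self, ticker: str):
        # Synchronous path: call the blocking yfinance helper directly.
        return get_current_stock_price(ticker)

    async def _arun(self, ticker: str):
        # Run the blocking helper in a worker thread so the event loop is not blocked.
        return await asyncio.to_thread(get_current_stock_price, ticker)
```

With async-capable tools in place, the agent could also be invoked with `await agent.arun(...)` instead of `agent.run(...)`.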

View File

@@ -7,7 +7,7 @@
"source": [
"# Custom agent with tool retrieval\n",
"\n",
"This notebook builds off of [this notebook](/docs/modules/agents/how_to/custom_llm_agent.html) and assumes familiarity with how agents work.\n",
"This notebook builds off of [this notebook](custom_llm_agent.html) and assumes familiarity with how agents work.\n",
"\n",
"The novel idea introduced in this notebook is the idea of using retrieval to select the set of tools to use to answer an agent query. This is useful when you have many many tools to select from. You cannot put the description of all the tools in the prompt (because of context length issues) so instead you dynamically select the N tools you do want to consider using at run time.\n",
"\n",

View File

@@ -9,8 +9,8 @@
"\n",
"This notebook goes over adding memory to **both** of an Agent and its tools. Before going through this notebook, please walk through the following notebooks, as this will build on top of both of them:\n",
"\n",
"- [Adding memory to an LLM Chain](/docs/modules/memory/integrations/adding_memory.html)\n",
"- [Custom Agents](/docs/modules/agents/how_to/custom_agent.html)\n",
"- [Adding memory to an LLM Chain](../../../memory/integrations/adding_memory.html)\n",
"- [Custom Agents](../../agents/custom_agent.html)\n",
"\n",
"We are going to create a custom Agent. The agent has access to a conversation memory, search tool, and a summarization tool. And, the summarization tool also needs access to the conversation memory."
]
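One way to satisfy that last requirement — letting the summarization tool read the conversation without writing to it — is a read-only view over a shared memory object. A sketch under the assumption that `ReadOnlySharedMemory` in this LangChain version wraps an existing memory as read-only:

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory
from langchain.prompts import PromptTemplate

# A single memory object: the agent writes to it, the summarization tool only reads it.
memory = ConversationBufferMemory(memory_key="chat_history")
readonly_memory = ReadOnlySharedMemory(memory=memory)

summary_prompt = PromptTemplate(
    input_variables=["input", "chat_history"],
    template=(
        "Summarize the conversation so far for the user.\n"
        "Conversation:\n{chat_history}\n"
        "Request: {input}\n"
    ),
)

# The summary chain sees the history through the read-only wrapper,
# so calling the tool never appends new turns to the agent's memory.
summary_chain = LLMChain(llm=OpenAI(), prompt=summary_prompt, memory=readonly_memory)
```

The agent itself would then be constructed with `memory=memory`, making it the only component that updates the conversation.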

File diff suppressed because one or more lines are too long

Some files were not shown because too many files have changed in this diff.