Merge branch 'master' into tjaffri/docugami_loader_source

@@ -15,7 +15,11 @@ You may use the button above, or follow these steps to open this repo in a Codes

For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).

## VS Code Dev Containers

[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)
[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
Note: If you click this link you will open the main repo, not your local cloned repo. To open your own clone instead, use this link and replace <yourusername> and <yourclonedreponame> with your GitHub username and the cloned repo name:

https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>
If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.

@@ -25,7 +29,7 @@ You can also follow these steps to open this repo in a container using the VS Co

2. Open a locally cloned copy of the code:

   - Clone this repository to your local filesystem.
   - Fork and Clone this repository to your local filesystem.
   - Press <kbd>F1</kbd> and select the **Dev Containers: Open Folder in Container...** command.
   - Select the cloned copy of this folder, wait for the container to start, and try things out!
.github/workflows/_release.yml

@@ -37,6 +37,7 @@ jobs:
          echo version=$(poetry version --short) >> $GITHUB_OUTPUT
      - name: Create Release
        uses: ncipollo/release-action@v1
        if: ${{ inputs.working-directory == 'libs/langchain' }}
        with:
          artifacts: "dist/*"
          token: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/codespell.yml

@@ -20,3 +20,5 @@ jobs:
        uses: actions/checkout@v3
      - name: Codespell
        uses: codespell-project/actions-codespell@v2
        with:
          skip: guide_imports.json
@@ -1,5 +1,5 @@
---
name: libs/langchain-experimental CI
name: libs/experimental CI

on:
  push:

@@ -1,5 +1,5 @@
---
name: libs/langchain-experimental Release
name: libs/experimental Release

on:
  pull_request:
.github/workflows/scheduled_test.yml (new file)

@@ -0,0 +1,38 @@
name: Scheduled tests

on:
  schedule:
    - cron: '0 13 * * *'

env:
  POETRY_VERSION: "1.4.2"

jobs:
  build:
    runs-on: ubuntu-latest
    environment: Scheduled testing
    strategy:
      matrix:
        python-version:
          - "3.8"
          - "3.9"
          - "3.10"
          - "3.11"
    name: Python ${{ matrix.python-version }}
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: "./.github/actions/poetry_setup"
        with:
          python-version: ${{ matrix.python-version }}
          poetry-version: "1.4.2"
          install-command: |
            echo "Running scheduled tests, installing dependencies with poetry..."
            poetry install -E scheduled_testing
      - name: Run tests
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          make scheduled_tests
        shell: bash
    secrets: inherit
.gitignore

@@ -162,6 +162,7 @@ docs/.docusaurus/
docs/.cache-loader/
docs/_dist
docs/api_reference/api_reference.rst
docs/api_reference/experimental_api_reference.rst
docs/api_reference/_build
docs/api_reference/*/
!docs/api_reference/_static/
@@ -43,6 +43,10 @@ Now:

`from langchain_experimental.sql import SQLDatabaseChain`

Alternatively, if you are just interested in using the query generation part of the SQL chain, you can check out [`create_sql_query_chain`](https://github.com/langchain-ai/langchain/blob/master/docs/extras/use_cases/tabular/sql_query.ipynb)

`from langchain.chains import create_sql_query_chain`
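For illustration, here is a minimal sketch of the two patterns side by side; it assumes an already-configured `llm` and `SQLDatabase` instance (`db`), and that you have `langchain_experimental` installed:

```python
# Hypothetical sketch: adjust the model/database setup to your environment.
from langchain.chains import create_sql_query_chain
from langchain_experimental.sql import SQLDatabaseChain

# Query generation only: produce a SQL statement without executing it.
query_chain = create_sql_query_chain(llm, db)
sql = query_chain.invoke({"question": "How many employees are there?"})

# Full chain (now in langchain_experimental): generate and run the query.
db_chain = SQLDatabaseChain.from_llm(llm, db)
answer = db_chain.run("How many employees are there?")
```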
## `load_prompt` for Python files

Note: this only applies if you want to load Python files as prompts.
Makefile

@@ -43,7 +43,12 @@ spell_fix:

help:
	@echo '----'
	@echo 'coverage - run unit tests and generate coverage report'
	@echo 'clean - run docs_clean and api_docs_clean'
	@echo 'docs_build - build the documentation'
	@echo 'docs_clean - clean the documentation build artifacts'
	@echo 'docs_linkcheck - run linkchecker on the documentation'
	@echo 'api_docs_build - build the API Reference documentation'
	@echo 'api_docs_clean - clean the API Reference documentation build artifacts'
	@echo 'api_docs_linkcheck - run linkchecker on the API Reference documentation'
	@echo 'spell_check - run codespell on the project'
	@echo 'spell_fix - run codespell on the project and fix the errors'
@@ -12,14 +12,14 @@
[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)
[](https://codespaces.new/hwchase17/langchain)
[](https://star-history.com/#hwchase17/langchain)
[](https://libraries.io/github/hwchase17/langchain)
[](https://libraries.io/github/langchain-ai/langchain)
[](https://github.com/hwchase17/langchain/issues)

Looking for the JS/TS version? Check out [LangChain.js](https://github.com/hwchase17/langchainjs).

**Production Support:** As you move your LangChains into production, we'd love to offer more comprehensive support.
Please fill out [this form](https://forms.gle/57d8AmXBYp8PP8tZA) and we'll set up a dedicated support Slack channel.
**Production Support:** As you move your LangChains into production, we'd love to offer more hands-on support.
Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to share more about what you're building, and our team will get in touch.

## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28
@@ -13,5 +13,6 @@ cp -r {docs_skeleton,snippets} _dist
cp -r extras/* _dist/docs_skeleton/docs
cd _dist/docs_skeleton
poetry run nbdoc_build
poetry run python generate_api_reference_links.py
yarn install
yarn start
@@ -7,20 +7,67 @@

# -- Path setup --------------------------------------------------------------

import json
import os
import sys
from pathlib import Path

import toml
from docutils import nodes
from sphinx.util.docutils import SphinxDirective

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys

import toml

_DIR = Path(__file__).parent.absolute()
sys.path.insert(0, os.path.abspath("."))
sys.path.insert(0, os.path.abspath("../../libs/langchain"))
sys.path.insert(0, os.path.abspath("../../libs/experimental"))

with open("../../libs/langchain/pyproject.toml") as f:
with (_DIR.parents[1] / "libs" / "langchain" / "pyproject.toml").open("r") as f:
    data = toml.load(f)
with (_DIR / "guide_imports.json").open("r") as f:
    imported_classes = json.load(f)


class ExampleLinksDirective(SphinxDirective):
    """Directive to generate a list of links to examples.

    We have a script that extracts links to API reference docs
    from our notebook examples. This directive uses that information
    to backlink to the examples from the API reference docs."""

    has_content = False
    required_arguments = 1

    def run(self):
        """Run the directive.

        Called any time :example_links:`ClassName` is used
        in the template *.rst files."""
        class_or_func_name = self.arguments[0]
        links = imported_classes.get(class_or_func_name, {})
        list_node = nodes.bullet_list()
        for doc_name, link in links.items():
            item_node = nodes.list_item()
            para_node = nodes.paragraph()
            link_node = nodes.reference()
            link_node["refuri"] = link
            link_node.append(nodes.Text(doc_name))
            para_node.append(link_node)
            item_node.append(para_node)
            list_node.append(item_node)
        if list_node.children:
            title_node = nodes.title()
            title_node.append(nodes.Text(f"Examples using {class_or_func_name}"))
            return [title_node, list_node]
        return [list_node]


def setup(app):
    app.add_directive("example_links", ExampleLinksDirective)


# -- Project information -----------------------------------------------------

@@ -53,6 +100,9 @@ extensions = [
]
source_suffix = [".rst"]

# some autodoc pydantic options are repeated in the actual template.
# potentially user error, but there may be bugs in the sphinx extension
# with options not being passed through correctly (from either the location in the code)
autodoc_pydantic_model_show_json = False
autodoc_pydantic_field_list_validators = False
autodoc_pydantic_config_members = False

@@ -65,13 +115,6 @@ autodoc_member_order = "groupwise"
autoclass_content = "both"
autodoc_typehints_format = "short"

autodoc_default_options = {
    "members": True,
    "show-inheritance": True,
    "inherited-members": "BaseModel",
    "undoc-members": True,
    "special-members": "__call__",
}
# autodoc_typehints = "description"
# Add any paths that contain templates here, relative to this directory.
templates_path = ["templates"]
|
@ -1,83 +1,257 @@
|
||||
"""Script for auto-generating api_reference.rst"""
|
||||
import glob
|
||||
import re
|
||||
"""Script for auto-generating api_reference.rst."""
|
||||
import importlib
|
||||
import inspect
|
||||
import typing
|
||||
from pathlib import Path
|
||||
from typing import TypedDict, Sequence, List, Dict, Literal, Union
|
||||
from enum import Enum
|
||||
|
||||
from pydantic import BaseModel
|
||||
|
||||
ROOT_DIR = Path(__file__).parents[2].absolute()
|
||||
HERE = Path(__file__).parent
|
||||
|
||||
PKG_DIR = ROOT_DIR / "libs" / "langchain" / "langchain"
|
||||
WRITE_FILE = Path(__file__).parent / "api_reference.rst"
|
||||
EXP_DIR = ROOT_DIR / "libs" / "experimental" / "langchain_experimental"
|
||||
WRITE_FILE = HERE / "api_reference.rst"
|
||||
EXP_WRITE_FILE = HERE / "experimental_api_reference.rst"
|
||||
|
||||
|
||||
def load_members() -> dict:
|
||||
members: dict = {}
|
||||
for py in glob.glob(str(PKG_DIR) + "/**/*.py", recursive=True):
|
||||
module = py[len(str(PKG_DIR)) + 1 :].replace(".py", "").replace("/", ".")
|
||||
top_level = module.split(".")[0]
|
||||
if top_level not in members:
|
||||
members[top_level] = {"classes": [], "functions": []}
|
||||
with open(py, "r") as f:
|
||||
for line in f.readlines():
|
||||
cls = re.findall(r"^class ([^_].*)\(", line)
|
||||
members[top_level]["classes"].extend([module + "." + c for c in cls])
|
||||
func = re.findall(r"^def ([^_].*)\(", line)
|
||||
afunc = re.findall(r"^async def ([^_].*)\(", line)
|
||||
func_strings = [module + "." + f for f in func + afunc]
|
||||
members[top_level]["functions"].extend(func_strings)
|
||||
return members
|
||||
ClassKind = Literal["TypedDict", "Regular", "Pydantic", "enum"]
|
||||
|
||||
|
||||
def construct_doc(members: dict) -> str:
|
||||
full_doc = """\
|
||||
.. _api_reference:
|
||||
class ClassInfo(TypedDict):
|
||||
"""Information about a class."""
|
||||
|
||||
=============
|
||||
API Reference
|
||||
=============
|
||||
name: str
|
||||
"""The name of the class."""
|
||||
qualified_name: str
|
||||
"""The fully qualified name of the class."""
|
||||
kind: ClassKind
|
||||
"""The kind of the class."""
|
||||
is_public: bool
|
||||
"""Whether the class is public or not."""
|
||||
|
||||
|
||||
class FunctionInfo(TypedDict):
|
||||
"""Information about a function."""
|
||||
|
||||
name: str
|
||||
"""The name of the function."""
|
||||
qualified_name: str
|
||||
"""The fully qualified name of the function."""
|
||||
is_public: bool
|
||||
"""Whether the function is public or not."""
|
||||
|
||||
|
||||
class ModuleMembers(TypedDict):
|
||||
"""A dictionary of module members."""
|
||||
|
||||
classes_: Sequence[ClassInfo]
|
||||
functions: Sequence[FunctionInfo]
|
||||
|
||||
|
||||
def _load_module_members(module_path: str, namespace: str) -> ModuleMembers:
|
||||
"""Load all members of a module.
|
||||
|
||||
Args:
|
||||
module_path: Path to the module.
|
||||
namespace: the namespace of the module.
|
||||
|
||||
Returns:
|
||||
list: A list of loaded module objects.
|
||||
"""
|
||||
classes_: List[ClassInfo] = []
|
||||
functions: List[FunctionInfo] = []
|
||||
module = importlib.import_module(module_path)
|
||||
for name, type_ in inspect.getmembers(module):
|
||||
if not hasattr(type_, "__module__"):
|
||||
continue
|
||||
if type_.__module__ != module_path:
|
||||
continue
|
||||
|
||||
if inspect.isclass(type_):
|
||||
if type(type_) == typing._TypedDictMeta: # type: ignore
|
||||
kind: ClassKind = "TypedDict"
|
||||
elif issubclass(type_, Enum):
|
||||
kind = "enum"
|
||||
elif issubclass(type_, BaseModel):
|
||||
kind = "Pydantic"
|
||||
else:
|
||||
kind = "Regular"
|
||||
|
||||
classes_.append(
|
||||
ClassInfo(
|
||||
name=name,
|
||||
qualified_name=f"{namespace}.{name}",
|
||||
kind=kind,
|
||||
is_public=not name.startswith("_"),
|
||||
)
|
||||
)
|
||||
elif inspect.isfunction(type_):
|
||||
functions.append(
|
||||
FunctionInfo(
|
||||
name=name,
|
||||
qualified_name=f"{namespace}.{name}",
|
||||
is_public=not name.startswith("_"),
|
||||
)
|
||||
)
|
||||
else:
|
||||
continue
|
||||
|
||||
return ModuleMembers(
|
||||
classes_=classes_,
|
||||
functions=functions,
|
||||
)
|
||||
|
||||
|
||||
def _merge_module_members(
|
||||
module_members: Sequence[ModuleMembers],
|
||||
) -> ModuleMembers:
|
||||
"""Merge module members."""
|
||||
classes_: List[ClassInfo] = []
|
||||
functions: List[FunctionInfo] = []
|
||||
for module in module_members:
|
||||
classes_.extend(module["classes_"])
|
||||
functions.extend(module["functions"])
|
||||
|
||||
return ModuleMembers(
|
||||
classes_=classes_,
|
||||
functions=functions,
|
||||
)
|
||||
|
||||
|
||||
def _load_package_modules(
|
||||
package_directory: Union[str, Path]
|
||||
) -> Dict[str, ModuleMembers]:
|
||||
"""Recursively load modules of a package based on the file system.
|
||||
|
||||
Traversal based on the file system makes it easy to determine which
|
||||
of the modules/packages are part of the package vs. 3rd party or built-in.
|
||||
|
||||
Parameters:
|
||||
package_directory: Path to the package directory.
|
||||
|
||||
Returns:
|
||||
list: A list of loaded module objects.
|
||||
"""
|
||||
package_path = (
|
||||
Path(package_directory)
|
||||
if isinstance(package_directory, str)
|
||||
else package_directory
|
||||
)
|
||||
modules_by_namespace = {}
|
||||
|
||||
package_name = package_path.name
|
||||
|
||||
for file_path in package_path.rglob("*.py"):
|
||||
if not file_path.name.startswith("__"):
|
||||
relative_module_name = file_path.relative_to(package_path)
|
||||
# Get the full namespace of the module
|
||||
namespace = str(relative_module_name).replace(".py", "").replace("/", ".")
|
||||
# Keep only the top level namespace
|
||||
top_namespace = namespace.split(".")[0]
|
||||
|
||||
try:
|
||||
module_members = _load_module_members(
|
||||
f"{package_name}.{namespace}", namespace
|
||||
)
|
||||
# Merge module members if the namespace already exists
|
||||
if top_namespace in modules_by_namespace:
|
||||
existing_module_members = modules_by_namespace[top_namespace]
|
||||
_module_members = _merge_module_members(
|
||||
[existing_module_members, module_members]
|
||||
)
|
||||
else:
|
||||
_module_members = module_members
|
||||
|
||||
modules_by_namespace[top_namespace] = _module_members
|
||||
|
||||
except ImportError as e:
|
||||
print(f"Error: Unable to import module '{namespace}' with error: {e}")
|
||||
|
||||
return modules_by_namespace
|
||||
|
||||
|
||||
def _construct_doc(pkg: str, members_by_namespace: Dict[str, ModuleMembers]) -> str:
|
||||
"""Construct the contents of the reference.rst file for the given package.
|
||||
|
||||
Args:
|
||||
pkg: The package name
|
||||
members_by_namespace: The members of the package, dict organized by top level
|
||||
module contains a list of classes and functions
|
||||
inside of the top level namespace.
|
||||
|
||||
Returns:
|
||||
The contents of the reference.rst file.
|
||||
"""
|
||||
full_doc = f"""\
|
||||
=======================
|
||||
``{pkg}`` API Reference
|
||||
=======================
|
||||
|
||||
"""
|
||||
for module, _members in sorted(members.items(), key=lambda kv: kv[0]):
|
||||
classes = _members["classes"]
|
||||
namespaces = sorted(members_by_namespace)
|
||||
|
||||
for module in namespaces:
|
||||
_members = members_by_namespace[module]
|
||||
classes = _members["classes_"]
|
||||
functions = _members["functions"]
|
||||
if not (classes or functions):
|
||||
continue
|
||||
|
||||
module_title = module.replace("_", " ").title()
|
||||
if module_title == "Llms":
|
||||
module_title = "LLMs"
|
||||
section = f":mod:`langchain.{module}`: {module_title}"
|
||||
section = f":mod:`{pkg}.{module}`"
|
||||
underline = "=" * (len(section) + 1)
|
||||
full_doc += f"""\
|
||||
{section}
|
||||
{'=' * (len(section) + 1)}
|
||||
{underline}
|
||||
|
||||
.. automodule:: langchain.{module}
|
||||
.. automodule:: {pkg}.{module}
|
||||
:no-members:
|
||||
:no-inherited-members:
|
||||
|
||||
"""
|
||||
|
||||
if classes:
|
||||
cstring = "\n ".join(sorted(classes))
|
||||
full_doc += f"""\
|
||||
Classes
|
||||
--------------
|
||||
.. currentmodule:: langchain
|
||||
.. currentmodule:: {pkg}
|
||||
|
||||
.. autosummary::
|
||||
:toctree: {module}
|
||||
:template: class.rst
|
||||
|
||||
{cstring}
|
||||
|
||||
"""
|
||||
|
||||
for class_ in classes:
|
||||
if not class_['is_public']:
|
||||
continue
|
||||
|
||||
if class_["kind"] == "TypedDict":
|
||||
template = "typeddict.rst"
|
||||
elif class_["kind"] == "enum":
|
||||
template = "enum.rst"
|
||||
elif class_["kind"] == "Pydantic":
|
||||
template = "pydantic.rst"
|
||||
else:
|
||||
template = "class.rst"
|
||||
|
||||
full_doc += f"""\
|
||||
:template: {template}
|
||||
|
||||
{class_["qualified_name"]}
|
||||
|
||||
"""
|
||||
|
||||
if functions:
|
||||
fstring = "\n ".join(sorted(functions))
|
||||
_functions = [f["qualified_name"] for f in functions if f["is_public"]]
|
||||
fstring = "\n ".join(sorted(_functions))
|
||||
full_doc += f"""\
|
||||
Functions
|
||||
--------------
|
||||
.. currentmodule:: langchain
|
||||
.. currentmodule:: {pkg}
|
||||
|
||||
.. autosummary::
|
||||
:toctree: {module}
|
||||
:template: function.rst
|
||||
|
||||
{fstring}
|
||||
|
||||
@ -86,10 +260,17 @@ Functions
|
||||
|
||||
|
||||
def main() -> None:
|
||||
members = load_members()
|
||||
full_doc = construct_doc(members)
|
||||
"""Generate the reference.rst file for each package."""
|
||||
lc_members = _load_package_modules(PKG_DIR)
|
||||
lc_doc = ".. _api_reference:\n\n" + _construct_doc("langchain", lc_members)
|
||||
with open(WRITE_FILE, "w") as f:
|
||||
f.write(full_doc)
|
||||
f.write(lc_doc)
|
||||
exp_members = _load_package_modules(EXP_DIR)
|
||||
exp_doc = ".. _experimental_api_reference:\n\n" + _construct_doc(
|
||||
"langchain_experimental", exp_members
|
||||
)
|
||||
with open(EXP_WRITE_FILE, "w") as f:
|
||||
f.write(exp_doc)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
docs/api_reference/guide_imports.json (new file)

@@ -1,9 +0,0 @@
Evaluation
=======================

LangChain has a number of convenient evaluation chains you can use off the shelf to grade your models' outputs.

.. automodule:: langchain.evaluation
    :members:
    :undoc-members:
    :inherited-members:
@@ -1,4 +1,5 @@
-e libs/langchain
-e libs/experimental
autodoc_pydantic==1.8.0
myst_parser
nbsphinx==0.8.9

@@ -10,4 +11,4 @@ sphinx-panels
toml
myst_nb
sphinx_copybutton
pydata-sphinx-theme==0.13.1
pydata-sphinx-theme==0.13.1
@@ -5,17 +5,6 @@

.. autoclass:: {{ objname }}

{% block methods %}
{% if methods %}
.. rubric:: {{ _('Methods') }}

.. autosummary::
{% for item in methods %}
   ~{{ name }}.{{ item }}
{%- endfor %}
{% endif %}
{% endblock %}

{% block attributes %}
{% if attributes %}
.. rubric:: {{ _('Attributes') }}

@@ -26,3 +15,22 @@
{%- endfor %}
{% endif %}
{% endblock %}

{% block methods %}
{% if methods %}
.. rubric:: {{ _('Methods') }}

.. autosummary::
{% for item in methods %}
   ~{{ name }}.{{ item }}
{%- endfor %}

{% for item in methods %}
.. automethod:: {{ name }}.{{ item }}
{%- endfor %}

{% endif %}
{% endblock %}


.. example_links:: {{ objname }}
docs/api_reference/templates/enum.rst (new file)

@@ -0,0 +1,14 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============

.. currentmodule:: {{ module }}

.. autoclass:: {{ objname }}

{% block attributes %}
{% for item in attributes %}
.. autoattribute:: {{ item }}
{% endfor %}
{% endblock %}

.. example_links:: {{ objname }}
docs/api_reference/templates/function.rst (new file)

@@ -0,0 +1,8 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============

.. currentmodule:: {{ module }}

.. autofunction:: {{ objname }}

.. example_links:: {{ objname }}
docs/api_reference/templates/pydantic.rst (new file)

@@ -0,0 +1,22 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============

.. currentmodule:: {{ module }}

.. autopydantic_model:: {{ objname }}
    :model-show-json: False
    :model-show-config-summary: False
    :model-show-validator-members: False
    :model-show-field-summary: False
    :field-signature-prefix: param
    :members:
    :undoc-members:
    :inherited-members:
    :member-order: groupwise
    :show-inheritance: True
    :special-members: __call__

{% block attributes %}
{% endblock %}

.. example_links:: {{ objname }}
docs/api_reference/templates/typeddict.rst (new file)

@@ -0,0 +1,14 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============

.. currentmodule:: {{ module }}

.. autoclass:: {{ objname }}

{% block attributes %}
{% for item in attributes %}
.. autoattribute:: {{ item }}
{% endfor %}
{% endblock %}

.. example_links:: {{ objname }}
@@ -19,7 +19,7 @@
{% block htmltitle %}
  <title>{{ title|striptags|e }}{{ titlesuffix }}</title>
{% endblock %}
<link rel="canonical" href="http://scikit-learn.org/stable/{{pagename}}.html" />
<link rel="canonical" href="https://api.python.langchain.com/en/latest/{{pagename}}.html" />

{% if favicon_url %}
  <link rel="shortcut icon" href="{{ favicon_url|e }}"/>
@@ -6,17 +6,6 @@
{%- set top_container_cls = "sk-landing-container" %}
{%- endif %}

{% if theme_link_to_live_contributing_page|tobool %}
  {# Link to development page for live builds #}
  {%- set development_link = "https://scikit-learn.org/dev/developers/index.html" %}
  {# Open on a new development page in new window/tab for live builds #}
  {%- set development_attrs = 'target="_blank" rel="noopener noreferrer"' %}
{%- else %}
  {%- set development_link = pathto('developers/index') %}
  {%- set development_attrs = '' %}
{%- endif %}


<nav id="navbar" class="{{ nav_bar_class }} navbar navbar-expand-md navbar-light bg-light py-0">
  <div class="container-fluid {{ top_container_cls }} px-0">
    {%- if logo_url %}

@@ -45,6 +34,9 @@
      <li class="nav-item">
        <a class="sk-nav-link nav-link" href="{{ pathto('api_reference') }}">API</a>
      </li>
      <li class="nav-item">
        <a class="sk-nav-link nav-link" href="{{ pathto('experimental_api_reference') }}">Experimental</a>
      </li>
      <li class="nav-item">
        <a class="sk-nav-link nav-link" target="_blank" rel="noopener noreferrer" href="https://python.langchain.com/">Python Docs</a>
      </li>
@@ -745,6 +745,11 @@ span.descname {
  background-color: transparent;
  padding: 0;
  font-family: monospace;
  font-size: 1.2rem;
}

em.property {
  font-weight: normal;
}

span.descclassname {
@@ -0,0 +1,24 @@
---
sidebar_position: 3
---
# Comparison Evaluators

Comparison evaluators in LangChain help measure the outputs of two different chains or LLMs. These evaluators are helpful for comparative analyses, such as A/B testing between two language models, or comparing different versions of the same model. They can also be useful for things like generating preference scores for AI-assisted reinforcement learning.

These evaluators inherit from the `PairwiseStringEvaluator` class, providing a comparison interface for two strings - typically, the outputs from two different prompts or models, or two versions of the same model. In essence, a comparison evaluator performs an evaluation on a pair of strings and returns a dictionary containing the evaluation score and other relevant details.

To create a custom comparison evaluator, inherit from the `PairwiseStringEvaluator` class and override the `_evaluate_string_pairs` method. If you require asynchronous evaluation, also override the `_aevaluate_string_pairs` method (a minimal sketch follows the list below).

Here's a summary of the key methods and properties of a comparison evaluator:

- `evaluate_string_pairs`: Evaluate the output string pairs. This function should be overridden when creating custom evaluators.
- `aevaluate_string_pairs`: Asynchronously evaluate the output string pairs. This function should be overridden for asynchronous evaluation.
- `requires_input`: This property indicates whether this evaluator requires an input string.
- `requires_reference`: This property specifies whether this evaluator requires a reference label.
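As a rough illustration, here is a minimal custom comparison evaluator; the base-class import path and keyword-only signature are assumptions based on the description above, so check the API reference for your installed version:

```python
from langchain.evaluation import PairwiseStringEvaluator


class LengthComparisonEvaluator(PairwiseStringEvaluator):
    """Toy evaluator that prefers the shorter of two predictions."""

    def _evaluate_string_pairs(
        self, *, prediction: str, prediction_b: str, reference=None, input=None, **kwargs
    ) -> dict:
        # Score 1.0 if the first prediction is no longer than the second, else 0.0.
        score = 1.0 if len(prediction) <= len(prediction_b) else 0.0
        return {"score": score, "reasoning": "shorter output preferred"}


evaluator = LengthComparisonEvaluator()
result = evaluator.evaluate_string_pairs(
    prediction="Concise answer.",
    prediction_b="A much longer and more rambling answer.",
)
print(result)  # {'score': 1.0, 'reasoning': 'shorter output preferred'}
```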
Detailed information about creating custom evaluators and the available built-in comparison evaluators is provided in the following sections.

import DocCardList from "@theme/DocCardList";

<DocCardList />
docs/docs_skeleton/docs/guides/evaluation/index.mdx (new file)

@@ -0,0 +1,31 @@
---
sidebar_position: 6
---

import DocCardList from "@theme/DocCardList";

# Evaluation

Building applications with language models involves many moving parts. One of the most critical components is ensuring that the outcomes produced by your models are reliable and useful across a broad array of inputs, and that they work well with your application's other software components. Ensuring reliability usually boils down to some combination of application design, testing & evaluation, and runtime checks.

The guides in this section review the APIs and functionality LangChain provides to help you better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.

LangChain offers various types of evaluators to help you measure performance and integrity on diverse data, and we hope to encourage the community to create and share other useful evaluators so everyone can improve. These docs will introduce the evaluator types, how to use them, and provide some examples of their use in real-world scenarios.

Each evaluator type in LangChain comes with ready-to-use implementations and an extensible API that allows for customization according to your unique requirements. Here are some of the types of evaluators we offer (a short usage example follows this list):

- [String Evaluators](/docs/guides/evaluation/string/): These evaluators assess the predicted string for a given input, usually comparing it against a reference string.
- [Trajectory Evaluators](/docs/guides/evaluation/trajectory/): These are used to evaluate the entire trajectory of agent actions.
- [Comparison Evaluators](/docs/guides/evaluation/comparison/): These evaluators are designed to compare predictions from two runs on a common input.
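For instance, an off-the-shelf string evaluator can be loaded and run in a few lines. This is a minimal sketch assuming the `load_evaluator` helper and the built-in `"criteria"` evaluator are available in your installed version of LangChain (the criteria evaluator uses an LLM under the hood, so API credentials are required):

```python
from langchain.evaluation import load_evaluator

# Grade a prediction against a built-in criterion ("conciseness").
evaluator = load_evaluator("criteria", criteria="conciseness")
result = evaluator.evaluate_strings(
    prediction="Paris is the capital of France.",
    input="What is the capital of France?",
)
print(result)  # e.g. {'reasoning': '...', 'value': 'Y', 'score': 1}
```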
These evaluators can be used across various scenarios and can be applied to different chain and LLM implementations in the LangChain library.

We are also working to share guides and cookbooks that demonstrate how to use these evaluators in real-world scenarios, such as:

- [Chain Comparisons](/docs/guides/evaluation/examples/comparisons): This example uses a comparison evaluator to predict the preferred output. It reviews ways to measure confidence intervals to select statistically significant differences in aggregate preference scores across different models or prompts.

## Reference Docs

For detailed information on the available evaluators, including how to instantiate, configure, and customize them, check out the [reference documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.evaluation) directly.

<DocCardList />
docs/docs_skeleton/docs/guides/evaluation/string/index.mdx (new file)

@@ -0,0 +1,27 @@
---
sidebar_position: 2
---
# String Evaluators

A string evaluator is a component within LangChain designed to assess the performance of a language model by comparing its generated outputs (predictions) to a reference string or an input. This comparison is a crucial step in the evaluation of language models, providing a measure of the accuracy or quality of the generated text.

In practice, string evaluators are typically used to evaluate a predicted string against a given input, such as a question or a prompt. Often, a reference label or context string is provided to define what a correct or ideal response would look like. These evaluators can be customized to tailor the evaluation process to fit your application's specific requirements.

To create a custom string evaluator, inherit from the `StringEvaluator` class and implement the `_evaluate_strings` method. If you require asynchronous support, also implement the `_aevaluate_strings` method (a minimal sketch follows the lists below).

Here's a summary of the key attributes and methods associated with a string evaluator:

- `evaluation_name`: Specifies the name of the evaluation.
- `requires_input`: Boolean attribute that indicates whether the evaluator requires an input string. If True, the evaluator will raise an error when the input isn't provided. If False, a warning will be logged if an input _is_ provided, indicating that it will not be considered in the evaluation.
- `requires_reference`: Boolean attribute specifying whether the evaluator requires a reference label. If True, the evaluator will raise an error when the reference isn't provided. If False, a warning will be logged if a reference _is_ provided, indicating that it will not be considered in the evaluation.

String evaluators also implement the following methods:

- `aevaluate_strings`: Asynchronously evaluates the output of the Chain or Language Model, with support for optional input and label.
- `evaluate_strings`: Synchronously evaluates the output of the Chain or Language Model, with support for optional input and label.
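As a rough sketch of a custom string evaluator, the snippet below grades a prediction by exact match against the reference; the base-class import path and keyword-only signature are assumptions based on the description above:

```python
from typing import Optional

from langchain.evaluation import StringEvaluator


class ExactMatchEvaluator(StringEvaluator):
    """Toy evaluator: score 1.0 if the prediction equals the reference."""

    @property
    def requires_reference(self) -> bool:
        return True

    def _evaluate_strings(
        self, *, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs
    ) -> dict:
        return {"score": float(prediction.strip() == (reference or "").strip())}


evaluator = ExactMatchEvaluator()
print(evaluator.evaluate_strings(prediction="4", reference="4"))  # {'score': 1.0}
```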
The following sections provide detailed information on available string evaluator implementations as well as how to create a custom string evaluator.

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -0,0 +1,28 @@
---
sidebar_position: 4
---
# Trajectory Evaluators

Trajectory Evaluators in LangChain provide a more holistic approach to evaluating an agent. These evaluators assess the full sequence of actions taken by an agent and their corresponding responses, which we refer to as the "trajectory". This allows you to better measure an agent's effectiveness and capabilities.

A Trajectory Evaluator implements the `AgentTrajectoryEvaluator` interface, which requires two main methods:

- `evaluate_agent_trajectory`: This method synchronously evaluates an agent's trajectory.
- `aevaluate_agent_trajectory`: This asynchronous counterpart allows evaluations to be run in parallel for efficiency.

Both methods accept three main parameters:

- `input`: The initial input given to the agent.
- `prediction`: The final predicted response from the agent.
- `agent_trajectory`: The intermediate steps taken by the agent, given as a list of tuples.

These methods return a dictionary. It is recommended that custom implementations return a `score` (a float indicating the effectiveness of the agent) and `reasoning` (a string explaining the reasoning behind the score).

You can capture an agent's trajectory by initializing the agent with the `return_intermediate_steps=True` parameter. This lets you collect all intermediate steps without relying on special callbacks (a short sketch of both pieces follows below).
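The snippet below is a minimal, hedged sketch of the flow: run an agent that was created with `return_intermediate_steps=True`, then pass its output to an off-the-shelf trajectory evaluator loaded via `load_evaluator`. The agent setup itself is assumed; substitute your own tools and model:

```python
from langchain.evaluation import load_evaluator

# Assume `agent` is an AgentExecutor created with return_intermediate_steps=True,
# e.g. initialize_agent(tools, llm, return_intermediate_steps=True).
question = "What is 3 to the power of 5?"
result = agent(question)

evaluator = load_evaluator("trajectory")  # uses an LLM to grade the trajectory
evaluation = evaluator.evaluate_agent_trajectory(
    input=question,
    prediction=result["output"],
    agent_trajectory=result["intermediate_steps"],
)
print(evaluation)  # e.g. {'score': 1.0, 'reasoning': '...'}
```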
For a deeper dive into the implementation and use of Trajectory Evaluators, refer to the sections below.

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -0,0 +1,9 @@
# LangChain Expression Language

import DocCardList from "@theme/DocCardList";

LangChain Expression Language is a declarative way to easily compose chains together.
Any chain constructed this way will automatically have full sync, async, and streaming support.
See guides below for how to interact with chains constructed this way, as well as cookbook examples; a small sketch follows.
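For instance, piping a prompt into a model produces a runnable chain. This is a rough sketch that assumes a chat model wrapper such as `ChatOpenAI` is configured in your environment:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI()

# The | operator composes runnables into a single chain.
chain = prompt | model

print(chain.invoke({"topic": "bears"}).content)
```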
<DocCardList />

docs/docs_skeleton/docs/guides/safety/index.mdx (new file)

@@ -0,0 +1,6 @@
# Preventing harmful outputs

One of the key concerns with using LLMs is that they may generate harmful or unethical text. This is an area of active research in the field. Here we present some built-in chains inspired by this research, which are intended to make the outputs of LLMs safer.

- [Moderation chain](/docs/use_cases/safety/moderation): Explicitly check if any output text is harmful and flag it (a short example follows this list).
- [Constitutional chain](/docs/use_cases/safety/constitutional_chain): Prompt the model with a set of principles which should guide its behavior.
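As a quick illustration, a moderation chain can be run on its own before returning text to a user. This is a minimal sketch assuming the `OpenAIModerationChain` wrapper and an `OPENAI_API_KEY` are available:

```python
from langchain.chains import OpenAIModerationChain

moderation_chain = OpenAIModerationChain()

# Flags or rewrites text that violates the moderation policy.
print(moderation_chain.run("This is a perfectly harmless sentence."))
```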
@@ -28,7 +28,7 @@ navigating around a browser.
### [OpenAI Functions](/docs/modules/agents/agent_types/openai_functions_agent.html)

Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been explicitly fine-tuned to detect when a
function should to be called and respond with the inputs that should be passed to the function.
function should be called and respond with the inputs that should be passed to the function.
The OpenAI Functions Agent is designed to work with these models.

### [Conversational](/docs/modules/agents/agent_types/chat_conversation_agent.html)
@@ -1,6 +1,6 @@
# OpenAI functions

Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to detect when a function should to be called and respond with the inputs that should be passed to the function.
Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to detect when a function should be called and respond with the inputs that should be passed to the function.
In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions.
The goal of the OpenAI Function APIs is to more reliably return valid and useful function calls than a generic text completion or chat API.
@@ -1,8 +0,0 @@
---
sidebar_position: 4
---
# Additional

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,7 +0,0 @@
# Dynamically selecting from multiple prompts

This notebook demonstrates how to use the `RouterChain` paradigm to create a chain that dynamically selects the prompt to use for a given input. Specifically we show how to use the `MultiPromptChain` to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt.

import Example from "@snippets/modules/chains/additional/multi_prompt_router.mdx"

<Example/>
@@ -1,6 +1,6 @@
# Sequential

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->


The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.
@@ -1,8 +0,0 @@
---
sidebar_position: 3
---
# Popular

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,8 +0,0 @@
# Summarization

A summarization chain can be used to summarize multiple documents. One way is to input multiple smaller documents, after they have been divided into chunks, and operate over them with a MapReduceDocumentsChain. You can also choose instead for the chain that does summarization to be a StuffDocumentsChain, or a RefineDocumentsChain.

import Example from "@snippets/modules/chains/popular/summarize.mdx"

<Example/>

@@ -1 +0,0 @@
label: 'Integrations'
@@ -1,8 +0,0 @@
---
sidebar_position: 3
---
# Comparison Evaluators

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,28 +0,0 @@
---
sidebar_position: 6
---

import DocCardList from "@theme/DocCardList";

# Evaluation

Language models can be unpredictable. This makes it challenging to ship reliable applications to production, where repeatable, useful outcomes across diverse inputs are a minimum requirement. Tests help demonstrate each component in an LLM application can produce the required or expected functionality. These tests also safeguard against regressions while you improve interconnected pieces of an integrated system. However, measuring the quality of generated text can be challenging. It can be hard to agree on the right set of metrics for your application, and it can be difficult to translate those into better performance. Furthermore, it's common to lack sufficient evaluation data to adequately test the range of inputs and expected outputs for each component when you're just getting started. The LangChain community is building open source tools and guides to help address these challenges.

LangChain exposes different types of evaluators for common types of evaluation. Each type has off-the-shelf implementations you can use to get started, as well as an
extensible API so you can create your own or contribute improvements for everyone to use. The following sections have example notebooks for you to get started.

- [String Evaluators](/docs/modules/evaluation/string/): Evaluate the predicted string for a given input, usually against a reference string
- [Trajectory Evaluators](/docs/modules/evaluation/trajectory/): Evaluate the whole trajectory of agent actions
- [Comparison Evaluators](/docs/modules/evaluation/comparison/): Compare predictions from two runs on a common input


This section also provides some additional examples of how you could use these evaluators for different scenarios or apply to different chain implementations in the LangChain library. Some examples include:

- [Preference Scoring Chain Outputs](/docs/modules/evaluation/examples/comparisons): An example using a comparison evaluator on different models or prompts to select statistically significant differences in aggregate preference scores


## Reference Docs

For detailed information of the available evaluators, including how to instantiate, configure, and customize them. Check out the [reference documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.evaluation) directly.

<DocCardList />
@@ -1,8 +0,0 @@
---
sidebar_position: 2
---
# String Evaluators

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -1,8 +0,0 @@
---
sidebar_position: 4
---
# Trajectory Evaluators

import DocCardList from "@theme/DocCardList";

<DocCardList />
@@ -18,5 +18,3 @@ Let chains choose which tools to use given high-level directives
Persist application state between runs of a chain
#### [Callbacks](/docs/modules/callbacks/)
Log and stream intermediate steps of any chain
#### [Evaluation](/docs/modules/evaluation/)
Evaluate the performance of a chain.
@@ -0,0 +1,17 @@
---
sidebar_position: 1
---
# Chat Messages

:::info
Head to [Integrations](/docs/integrations/memory/) for documentation on built-in memory integrations with 3rd-party databases and tools.
:::

One of the core utility classes underpinning most (if not all) memory modules is the `ChatMessageHistory` class.
This is a super lightweight wrapper which exposes convenience methods for saving human messages and AI messages, and then fetching them all.

You may want to use this class directly if you are managing memory outside of a chain; a minimal sketch is shown below.
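This rough sketch assumes the in-memory `ChatMessageHistory` implementation exported from `langchain.memory`:

```python
from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("Hello! How can I help you today?")

# Retrieve everything that was stored, as a list of message objects.
print(history.messages)
```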
import GetStarted from "@snippets/modules/memory/chat_messages/get_started.mdx"

<GetStarted/>
@@ -1,34 +1,62 @@
---
sidebar_position: 3
---

# Memory

🚧 _Docs under construction_ 🚧
Most LLM applications have a conversational interface. An essential component of a conversation is being able to refer to information introduced earlier in the conversation.
At bare minimum, a conversational system should be able to access some window of past messages directly.
A more complex system will need to have a world model that it is constantly updating, which allows it to do things like maintain information about entities and their relationships.

:::info
Head to [Integrations](/docs/integrations/memory/) for documentation on built-in memory integrations with 3rd-party tools.
:::
We call this ability to store information about past interactions "memory".
LangChain provides a lot of utilities for adding memory to a system.
These utilities can be used by themselves or incorporated seamlessly into a chain.

By default, Chains and Agents are stateless,
meaning that they treat each incoming query independently (like the underlying LLMs and chat models themselves).
In some applications, like chatbots, it is essential
to remember previous interactions, both in the short and long-term.
The **Memory** class does exactly that.
A memory system needs to support two basic actions: reading and writing.
Recall that every chain defines some core execution logic that expects certain inputs.
Some of these inputs come directly from the user, but some of these inputs can come from memory.
A chain will interact with its memory system twice in a given run.
1. AFTER receiving the initial user inputs but BEFORE executing the core logic, a chain will READ from its memory system and augment the user inputs.
2. AFTER executing the core logic but BEFORE returning the answer, a chain will WRITE the inputs and outputs of the current run to memory, so that they can be referred to in future runs.

LangChain provides memory components in two forms.
First, LangChain provides helper utilities for managing and manipulating previous chat messages.
These are designed to be modular and useful regardless of how they are used.
Secondly, LangChain provides easy ways to incorporate these utilities into chains.



## Building memory into a system
The two core design decisions in any memory system are:
- How state is stored
- How state is queried

### Storing: List of chat messages
Underlying any memory is a history of all chat interactions.
Even if these are not all used directly, they need to be stored in some form.
One of the key parts of the LangChain memory module is a series of integrations for storing these chat messages,
from in-memory lists to persistent databases.

- [Chat message storage](/docs/modules/memory/chat_messages/): How to work with Chat Messages, and the various integrations offered

### Querying: Data structures and algorithms on top of chat messages
Keeping a list of chat messages is fairly straightforward.
What is less straightforward are the data structures and algorithms built on top of chat messages that serve a view of those messages that is most useful.

A very simple memory system might just return the most recent messages each run. A slightly more complex memory system might return a succinct summary of the past K messages.
An even more sophisticated system might extract entities from stored messages and only return information about entities referenced in the current run.

Each application can have different requirements for how memory is queried. The memory module should make it easy to both get started with simple memory systems and write your own custom systems if needed (a short code sketch follows below).

- [Memory types](/docs/modules/memory/types/): The various data structures and algorithms that make up the memory types LangChain supports
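To make the read/write cycle concrete, here is a minimal sketch using the simplest built-in memory type, `ConversationBufferMemory`, outside of any chain:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

# WRITE: record the inputs and outputs of one run.
memory.save_context({"input": "hi"}, {"output": "what's up"})

# READ: fetch the stored state to augment the next run's inputs.
print(memory.load_memory_variables({}))
# {'history': "Human: hi\nAI: what's up"}
```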
## Get started

Memory involves keeping a concept of state around throughout a user's interactions with a language model. A user's interactions with a language model are captured in the concept of ChatMessages, so this boils down to ingesting, capturing, transforming and extracting knowledge from a sequence of chat messages. There are many different ways to do this, each of which exists as its own memory type.

In general, for each type of memory there are two ways to understand using memory. These are the standalone functions which extract information from a sequence of messages, and then there is the way you can use this type of memory in a chain.

Memory can return multiple pieces of information (for example, the most recent N messages and a summary of all previous messages). The returned information can either be a string or a list of messages.
Let's take a look at what Memory actually looks like in LangChain.
Here we'll cover the basics of interacting with an arbitrary memory class.

import GetStarted from "@snippets/modules/memory/get_started.mdx"

<GetStarted/>

## Next steps

And that's it for getting started!
Please see the other sections for walkthroughs of more advanced topics,
like custom memory, multiple memories, and more.
@@ -4,6 +4,6 @@ This notebook shows how to use `ConversationBufferMemory`. This memory allows fo

We can first extract it as a string.

import Example from "@snippets/modules/memory/how_to/buffer.mdx"
import Example from "@snippets/modules/memory/types/buffer.mdx"

<Example/>
@@ -4,6 +4,6 @@

Let's first explore the basic functionality of this type of memory.

import Example from "@snippets/modules/memory/how_to/buffer_window.mdx"
import Example from "@snippets/modules/memory/types/buffer_window.mdx"

<Example/>
@@ -4,6 +4,6 @@ Entity Memory remembers given facts about specific entities in a conversation. I

Let's first walk through using this functionality.

import Example from "@snippets/modules/memory/how_to/entity_summary_memory.mdx"
import Example from "@snippets/modules/memory/types/entity_summary_memory.mdx"

<Example/>
docs/docs_skeleton/docs/modules/memory/types/index.mdx (new file)

@@ -0,0 +1,8 @@
---
sidebar_position: 2
---
# Memory Types

There are many different types of memory.
Each has its own parameters, its own return types, and is useful in different scenarios.
Please see the individual pages for more detail on each one.
@@ -4,6 +4,6 @@ Conversation summary memory summarizes the conversation as it happens and stores

Let's first explore the basic functionality of this type of memory.

import Example from "@snippets/modules/memory/how_to/summary.mdx"
import Example from "@snippets/modules/memory/types/summary.mdx"

<Example/>
@@ -6,6 +6,6 @@ This differs from most of the other Memory classes in that it doesn't explicitly

In this case, the "docs" are previous conversation snippets. This can be useful to refer to relevant pieces of information that the AI was told earlier in the conversation.

import Example from "@snippets/modules/memory/how_to/vectorstore_retriever_memory.mdx"
import Example from "@snippets/modules/memory/types/vectorstore_retriever_memory.mdx"

<Example/>
@@ -8,7 +8,7 @@ Head to [Integrations](/docs/integrations/llms/) for documentation on built-in i
:::

Large Language Models (LLMs) are a core component of LangChain.
LangChain does not serve it's own LLMs, but rather provides a standard interface for interacting with many different LLMs.
LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.

## Get started
|
@@ -3,10 +3,12 @@ sidebar_position: 0
---
# Prompts

The new way of programming models is through prompts.
A **prompt** refers to the input to the model.
This input is often constructed from multiple components.
LangChain provides several classes and functions to make constructing and working with prompts easy.
A prompt for a language model is a set of instructions or input provided by a user to
guide the model's response, helping it understand the context and generate relevant
and coherent language-based output, such as answering questions, completing sentences,
or engaging in a conversation.

- [Prompt templates](/docs/modules/model_io/prompts/prompt_templates/): Parametrize model inputs
LangChain provides several classes and functions to help construct and work with prompts.

- [Prompt templates](/docs/modules/model_io/prompts/prompt_templates/): Parametrized model inputs
- [Example selectors](/docs/modules/model_io/prompts/example_selectors/): Dynamically select examples to include in prompts
@@ -4,18 +4,15 @@

# Prompt templates

Language models take text as input - that text is commonly referred to as a prompt.
Typically this is not simply a hardcoded string but rather a combination of a template, some examples, and user input.
LangChain provides several classes and functions to make constructing and working with prompts easy.
Prompt templates are pre-defined recipes for generating prompts for language models.

## What is a prompt template?
A template may include instructions, few shot examples, and specific context and
questions appropriate for a given task.

A prompt template refers to a reproducible way to generate a prompt. It contains a text string ("the template"), that can take in a set of parameters from the end user and generates a prompt.
LangChain provides tooling to create and work with prompt templates.

A prompt template can contain:
- instructions to the language model,
- a set of few shot examples to help the language model generate a better response,
- a question to the language model.
LangChain strives to create model agnostic templates to make it easy to reuse
existing templates across different language models.
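As a quick illustration of a reusable, model-agnostic template, here is a minimal sketch using the standard `PromptTemplate` class:

```python
from langchain.prompts import PromptTemplate

prompt_template = PromptTemplate.from_template(
    "Tell me a {adjective} joke about {content}."
)

# The same template can be formatted with different parameters,
# and the resulting string can be sent to any LLM.
print(prompt_template.format(adjective="funny", content="chickens"))
# Tell me a funny joke about chickens.
```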
import GetStarted from "@snippets/modules/model_io/prompts/prompt_templates/get_started.mdx"

@@ -0,0 +1 @@
label: 'How to'
@@ -2,7 +2,7 @@
sidebar_position: 2
---

# Conversational Retrieval QA
# Store and reference chat history
The ConversationalRetrievalQA chain builds on RetrievalQAChain to provide a chat history component.

It first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question answering chain to return a response.
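A minimal sketch of that flow, assuming you already have an `llm` and a vector-store `retriever` configured:

```python
from langchain.chains import ConversationalRetrievalChain

qa = ConversationalRetrievalChain.from_llm(llm, retriever=retriever)

chat_history = []
result = qa({"question": "What did the president say about the economy?", "chat_history": chat_history})
chat_history.append(("What did the president say about the economy?", result["answer"]))

# Follow-up questions are condensed with the chat history into a standalone question.
result = qa({"question": "Did he mention inflation?", "chat_history": chat_history})
print(result["answer"])
```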
@@ -1,4 +1,4 @@
# Dynamically selecting from multiple retrievers
# Dynamically select from multiple retrievers

This notebook demonstrates how to use the `RouterChain` paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the `MultiRetrievalQAChain` to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it.

@@ -1,4 +1,4 @@
# Document QA
# QA over in-memory documents

Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](/docs/modules/chains/document/).

@@ -1,7 +1,7 @@
---
sidebar_position: 1
---
# Retrieval QA
# QA using a Retriever

This example showcases question answering over an index.
@@ -128,6 +128,10 @@ const config = {
        hideable: true,
      },
    },
    colorMode: {
      disableSwitch: false,
      respectPrefersColorScheme: true,
    },
    prism: {
      theme: {
        ...baseLightCodeBlockTheme,
183
docs/docs_skeleton/generate_api_reference_links.py
Normal file
@ -0,0 +1,183 @@
|
||||
import importlib
|
||||
import inspect
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
from pathlib import Path
|
||||
import argparse
|
||||
|
||||
logging.basicConfig(level=logging.INFO)
|
||||
logger = logging.getLogger(__name__)
|
||||
# Base URL for all class documentation
|
||||
_BASE_URL = "https://api.python.langchain.com/en/latest/"
|
||||
|
||||
# Regular expression to match Python code blocks
|
||||
code_block_re = re.compile(r"^(```python\n)(.*?)(```\n)", re.DOTALL | re.MULTILINE)
|
||||
# Regular expression to match langchain import lines
|
||||
_IMPORT_RE = re.compile(
|
||||
r"from\s+(langchain\.\w+(\.\w+)*?)\s+import\s+"
|
||||
r"((?:\w+(?:,\s*)?)*" # Match zero or more words separated by a comma+optional ws
|
||||
r"(?:\s*\(.*?\))?)", # Match optional parentheses block
|
||||
re.DOTALL, # Match newlines as well
|
||||
)
|
||||
|
||||
_CURRENT_PATH = Path(__file__).parent.absolute()
|
||||
# Directory where generated markdown files are stored
|
||||
_DOCS_DIR = _CURRENT_PATH / "docs"
|
||||
_JSON_PATH = _CURRENT_PATH.parent / "api_reference" / "guide_imports.json"
|
||||
|
||||
|
||||
def find_files(path):
|
||||
"""Find all MDX files in the given path"""
|
||||
# Check if is file first
|
||||
if os.path.isfile(path):
|
||||
yield path
|
||||
return
|
||||
for root, _, files in os.walk(path):
|
||||
for file in files:
|
||||
if file.endswith(".mdx") or file.endswith(".md"):
|
||||
yield os.path.join(root, file)
|
||||
|
||||
|
||||
def get_full_module_name(module_path, class_name):
|
||||
"""Get full module name using inspect"""
|
||||
module = importlib.import_module(module_path)
|
||||
class_ = getattr(module, class_name)
|
||||
return inspect.getmodule(class_).__name__
|
||||
|
||||
|
||||
def get_args():
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument(
|
||||
"--docs_dir",
|
||||
type=str,
|
||||
default=_DOCS_DIR,
|
||||
help="Directory where generated markdown files are stored",
|
||||
)
|
||||
return parser.parse_args()
|
||||
|
||||
|
||||
def main():
|
||||
"""Main function"""
|
||||
args = get_args()
|
||||
global_imports = {}
|
||||
|
||||
for file in find_files(args.docs_dir):
|
||||
print(f"Adding links for imports in {file}")
|
||||
file_imports = replace_imports(file)
|
||||
|
||||
if file_imports:
|
||||
# Use relative file path as key
|
||||
relative_path = (
|
||||
os.path.relpath(file, _DOCS_DIR).replace(".mdx", "").replace(".md", "")
|
||||
)
|
||||
|
||||
doc_url = f"https://python.langchain.com/docs/{relative_path}"
|
||||
for import_info in file_imports:
|
||||
doc_title = import_info["title"]
|
||||
class_name = import_info["imported"]
|
||||
if class_name not in global_imports:
|
||||
global_imports[class_name] = {}
|
||||
global_imports[class_name][doc_title] = doc_url
|
||||
|
||||
# Write the global imports information to a JSON file
|
||||
_JSON_PATH.parent.mkdir(parents=True, exist_ok=True)
|
||||
with _JSON_PATH.open("w") as f:
|
||||
json.dump(global_imports, f)
|
||||
|
||||
|
||||
def _get_doc_title(data: str, file_name: str) -> str:
|
||||
try:
|
||||
return re.findall(r"^#\s+(.*)", data, re.MULTILINE)[0]
|
||||
except IndexError:
|
||||
pass
|
||||
# Parse the rst-style titles
|
||||
try:
|
||||
return re.findall(r"^(.*)\n=+\n", data, re.MULTILINE)[0]
|
||||
except IndexError:
|
||||
return file_name
|
||||
|
||||
|
||||
def replace_imports(file):
|
||||
"""Replace imports in each Python code block with links to their
|
||||
documentation and append the import info in a comment"""
|
||||
all_imports = []
|
||||
with open(file, "r") as f:
|
||||
data = f.read()
|
||||
|
||||
file_name = os.path.basename(file)
|
||||
_DOC_TITLE = _get_doc_title(data, file_name)
|
||||
|
||||
def replacer(match):
|
||||
# Extract the code block content
|
||||
code = match.group(2)
|
||||
# Replace if any import comment exists
|
||||
# TODO: Use our own custom <code> component rather than this
|
||||
# injection method
|
||||
existing_comment_re = re.compile(r"^<!--IMPORTS:.*?-->\n", re.MULTILINE)
|
||||
code = existing_comment_re.sub("", code)
|
||||
|
||||
# Process imports in the code block
|
||||
imports = []
|
||||
for import_match in _IMPORT_RE.finditer(code):
|
||||
module = import_match.group(1)
|
||||
imports_str = (
|
||||
import_match.group(3).replace("(\n", "").replace("\n)", "")
|
||||
) # Handle newlines within parentheses
|
||||
# remove any newline and spaces, then split by comma
|
||||
imported_classes = [
|
||||
imp.strip()
|
||||
for imp in re.split(r",\s*", imports_str.replace("\n", ""))
|
||||
if imp.strip()
|
||||
]
|
||||
for class_name in imported_classes:
|
||||
try:
|
||||
module_path = get_full_module_name(module, class_name)
|
||||
except AttributeError as e:
|
||||
logger.warning(f"Could not find module for {class_name}, {e}")
|
||||
continue
|
||||
except ImportError as e:
|
||||
logger.warning(f"Failed to load for class {class_name}, {e}")
|
||||
continue
|
||||
|
||||
url = (
|
||||
_BASE_URL
|
||||
+ module_path.split(".")[1]
|
||||
+ "/"
|
||||
+ module_path
|
||||
+ "."
|
||||
+ class_name
|
||||
+ ".html"
|
||||
)
|
||||
|
||||
# Add the import information to our list
|
||||
imports.append(
|
||||
{
|
||||
"imported": class_name,
|
||||
"source": module,
|
||||
"docs": url,
|
||||
"title": _DOC_TITLE,
|
||||
}
|
||||
)
|
||||
|
||||
if imports:
|
||||
all_imports.extend(imports)
|
||||
# Create a unique comment containing the import information
|
||||
import_comment = f"<!--IMPORTS:{json.dumps(imports)}-->"
|
||||
# Inject the import comment at the start of the code block
|
||||
return match.group(1) + import_comment + "\n" + code + match.group(3)
|
||||
else:
|
||||
# If there are no imports, return the original match
|
||||
return match.group(0)
|
||||
|
||||
# Use re.sub to replace each Python code block
|
||||
data = code_block_re.sub(replacer, data)
|
||||
|
||||
with open(file, "w") as f:
|
||||
f.write(data)
|
||||
return all_imports
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
71
docs/docs_skeleton/package-lock.json
generated
@ -12,7 +12,7 @@
|
||||
"@docusaurus/preset-classic": "2.4.0",
|
||||
"@docusaurus/remark-plugin-npm2yarn": "^2.4.0",
|
||||
"@mdx-js/react": "^1.6.22",
|
||||
"@mendable/search": "^0.0.125",
|
||||
"@mendable/search": "^0.0.137",
|
||||
"clsx": "^1.2.1",
|
||||
"json-loader": "^0.5.7",
|
||||
"process": "^0.11.10",
|
||||
@ -3212,10 +3212,11 @@
|
||||
}
|
||||
},
|
||||
"node_modules/@mendable/search": {
|
||||
"version": "0.0.125",
|
||||
"resolved": "https://registry.npmjs.org/@mendable/search/-/search-0.0.125.tgz",
|
||||
"integrity": "sha512-Mb1J3zDhOyBZV9cXqJocSOBNYGpe8+LQDqd9n9laPWxosSJcSTUewqtlIbMerrYsScBsxskoSiWgRsc7xF5z0Q==",
|
||||
"version": "0.0.137",
|
||||
"resolved": "https://registry.npmjs.org/@mendable/search/-/search-0.0.137.tgz",
|
||||
"integrity": "sha512-2J2fd5eqToK+mLzrSDA6NAr4F1kfql7QRiHpD7AUJJX0nqpvInhr/mMJKBCUSCv2z76UKCmF5wLuPSw+C90Qdg==",
|
||||
"dependencies": {
|
||||
"html-react-parser": "^4.2.0",
|
||||
"posthog-js": "^1.45.1"
|
||||
},
|
||||
"peerDependencies": {
|
||||
@ -8332,6 +8333,33 @@
|
||||
"safe-buffer": "~5.1.0"
|
||||
}
|
||||
},
|
||||
"node_modules/html-dom-parser": {
|
||||
"version": "4.0.0",
|
||||
"resolved": "https://registry.npmjs.org/html-dom-parser/-/html-dom-parser-4.0.0.tgz",
|
||||
"integrity": "sha512-TUa3wIwi80f5NF8CVWzkopBVqVAtlawUzJoLwVLHns0XSJGynss4jiY0mTWpiDOsuyw+afP+ujjMgRh9CoZcXw==",
|
||||
"dependencies": {
|
||||
"domhandler": "5.0.3",
|
||||
"htmlparser2": "9.0.0"
|
||||
}
|
||||
},
|
||||
"node_modules/html-dom-parser/node_modules/htmlparser2": {
|
||||
"version": "9.0.0",
|
||||
"resolved": "https://registry.npmjs.org/htmlparser2/-/htmlparser2-9.0.0.tgz",
|
||||
"integrity": "sha512-uxbSI98wmFT/G4P2zXx4OVx04qWUmyFPrD2/CNepa2Zo3GPNaCaaxElDgwUrwYWkK1nr9fft0Ya8dws8coDLLQ==",
|
||||
"funding": [
|
||||
"https://github.com/fb55/htmlparser2?sponsor=1",
|
||||
{
|
||||
"type": "github",
|
||||
"url": "https://github.com/sponsors/fb55"
|
||||
}
|
||||
],
|
||||
"dependencies": {
|
||||
"domelementtype": "^2.3.0",
|
||||
"domhandler": "^5.0.3",
|
||||
"domutils": "^3.1.0",
|
||||
"entities": "^4.5.0"
|
||||
}
|
||||
},
|
||||
"node_modules/html-entities": {
|
||||
"version": "2.4.0",
|
||||
"resolved": "https://registry.npmjs.org/html-entities/-/html-entities-2.4.0.tgz",
|
||||
@ -8375,6 +8403,20 @@
|
||||
"node": ">= 12"
|
||||
}
|
||||
},
|
||||
"node_modules/html-react-parser": {
|
||||
"version": "4.2.0",
|
||||
"resolved": "https://registry.npmjs.org/html-react-parser/-/html-react-parser-4.2.0.tgz",
|
||||
"integrity": "sha512-gzU55AS+FI6qD7XaKe5BLuLFM2Xw0/LodfMWZlxV9uOHe7LCD5Lukx/EgYuBI3c0kLu0XlgFXnSzO0qUUn3Vrg==",
|
||||
"dependencies": {
|
||||
"domhandler": "5.0.3",
|
||||
"html-dom-parser": "4.0.0",
|
||||
"react-property": "2.0.0",
|
||||
"style-to-js": "1.1.3"
|
||||
},
|
||||
"peerDependencies": {
|
||||
"react": "0.14 || 15 || 16 || 17 || 18"
|
||||
}
|
||||
},
|
||||
"node_modules/html-tags": {
|
||||
"version": "3.3.1",
|
||||
"resolved": "https://registry.npmjs.org/html-tags/-/html-tags-3.3.1.tgz",
|
||||
@ -11762,6 +11804,11 @@
|
||||
"webpack": ">=4.41.1 || 5.x"
|
||||
}
|
||||
},
|
||||
"node_modules/react-property": {
|
||||
"version": "2.0.0",
|
||||
"resolved": "https://registry.npmjs.org/react-property/-/react-property-2.0.0.tgz",
|
||||
"integrity": "sha512-kzmNjIgU32mO4mmH5+iUyrqlpFQhF8K2k7eZ4fdLSOPFrD1XgEuSBv9LDEgxRXTMBqMd8ppT0x6TIzqE5pdGdw=="
|
||||
},
|
||||
"node_modules/react-router": {
|
||||
"version": "5.3.4",
|
||||
"resolved": "https://registry.npmjs.org/react-router/-/react-router-5.3.4.tgz",
|
||||
@ -13127,6 +13174,22 @@
|
||||
"url": "https://github.com/sponsors/sindresorhus"
|
||||
}
|
||||
},
|
||||
"node_modules/style-to-js": {
|
||||
"version": "1.1.3",
|
||||
"resolved": "https://registry.npmjs.org/style-to-js/-/style-to-js-1.1.3.tgz",
|
||||
"integrity": "sha512-zKI5gN/zb7LS/Vm0eUwjmjrXWw8IMtyA8aPBJZdYiQTXj4+wQ3IucOLIOnF7zCHxvW8UhIGh/uZh/t9zEHXNTQ==",
|
||||
"dependencies": {
|
||||
"style-to-object": "0.4.1"
|
||||
}
|
||||
},
|
||||
"node_modules/style-to-js/node_modules/style-to-object": {
|
||||
"version": "0.4.1",
|
||||
"resolved": "https://registry.npmjs.org/style-to-object/-/style-to-object-0.4.1.tgz",
|
||||
"integrity": "sha512-HFpbb5gr2ypci7Qw+IOhnP2zOU7e77b+rzM+wTzXzfi1PrtBCX0E7Pk4wL4iTLnhzZ+JgEGAhX81ebTg/aYjQw==",
|
||||
"dependencies": {
|
||||
"inline-style-parser": "0.1.1"
|
||||
}
|
||||
},
|
||||
"node_modules/style-to-object": {
|
||||
"version": "0.3.0",
|
||||
"resolved": "https://registry.npmjs.org/style-to-object/-/style-to-object-0.3.0.tgz",
|
||||
|
@ -23,7 +23,7 @@
|
||||
"@docusaurus/preset-classic": "2.4.0",
|
||||
"@docusaurus/remark-plugin-npm2yarn": "^2.4.0",
|
||||
"@mdx-js/react": "^1.6.22",
|
||||
"@mendable/search": "^0.0.125",
|
||||
"@mendable/search": "^0.0.137",
|
||||
"clsx": "^1.2.1",
|
||||
"json-loader": "^0.5.7",
|
||||
"process": "^0.11.10",
|
||||
|
@ -21,7 +21,7 @@ function Imports({ imports }) {
|
||||
</h4>
|
||||
<ul style={{ paddingBottom: "1rem" }}>
|
||||
{imports.map(({ imported, source, docs }) => (
|
||||
<li>
|
||||
<li key={imported}>
|
||||
<a href={docs}>
|
||||
<span>{imported}</span>
|
||||
</a>{" "}
|
||||
@ -34,14 +34,25 @@ function Imports({ imports }) {
|
||||
}
|
||||
|
||||
export default function CodeBlockWrapper({ children, ...props }) {
|
||||
// Initialize imports as an empty array
|
||||
let imports = [];
|
||||
|
||||
// Check if children is a string
|
||||
if (typeof children === "string") {
|
||||
return <CodeBlock {...props}>{children}</CodeBlock>;
|
||||
// Search for an IMPORTS comment in the code
|
||||
const match = /<!--IMPORTS:(.*?)-->\n/.exec(children);
|
||||
if (match) {
|
||||
imports = JSON.parse(match[1]);
|
||||
children = children.replace(match[0], "");
|
||||
}
|
||||
} else if (children.imports) {
|
||||
imports = children.imports;
|
||||
}
|
||||
|
||||
return (
|
||||
<>
|
||||
<CodeBlock {...props}>{children.content}</CodeBlock>
|
||||
<Imports imports={children.imports} />
|
||||
<CodeBlock {...props}>{children}</CodeBlock>
|
||||
{imports.length > 0 && <Imports imports={imports} />}
|
||||
</>
|
||||
);
|
||||
}
|
||||
}
|
BIN
docs/docs_skeleton/static/img/SQLDatabaseToolkit.png
Normal file
After Width: | Height: | Size: 405 KiB |
BIN
docs/docs_skeleton/static/img/chat_use_case.png
Normal file
After Width: | Height: | Size: 93 KiB |
BIN
docs/docs_skeleton/static/img/chat_use_case_2.png
Normal file
After Width: | Height: | Size: 102 KiB |
BIN
docs/docs_skeleton/static/img/create_sql_query_chain.png
Normal file
After Width: | Height: | Size: 190 KiB |
BIN
docs/docs_skeleton/static/img/extraction.png
Normal file
After Width: | Height: | Size: 125 KiB |
BIN
docs/docs_skeleton/static/img/extraction_trace_function.png
Normal file
After Width: | Height: | Size: 131 KiB |
BIN
docs/docs_skeleton/static/img/extraction_trace_function_2.png
Normal file
After Width: | Height: | Size: 211 KiB |
BIN
docs/docs_skeleton/static/img/extraction_trace_joke.png
Normal file
After Width: | Height: | Size: 132 KiB |
BIN
docs/docs_skeleton/static/img/memory_diagram.png
Normal file
After Width: | Height: | Size: 111 KiB |
BIN
docs/docs_skeleton/static/img/sql_usecase.png
Normal file
After Width: | Height: | Size: 119 KiB |
BIN
docs/docs_skeleton/static/img/sqldbchain_trace.png
Normal file
After Width: | Height: | Size: 266 KiB |
BIN
docs/docs_skeleton/static/img/summarization_use_case_1.png
Normal file
After Width: | Height: | Size: 196 KiB |
BIN
docs/docs_skeleton/static/img/summarization_use_case_2.png
Normal file
After Width: | Height: | Size: 90 KiB |
BIN
docs/docs_skeleton/static/img/summarization_use_case_3.png
Normal file
After Width: | Height: | Size: 174 KiB |
@ -556,6 +556,14 @@
|
||||
"source": "/docs/integrations/llamacpp",
|
||||
"destination": "/docs/integrations/providers/llamacpp"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/integrations/log10.html",
|
||||
"destination": "/docs/integrations/providers/log10"
|
||||
},
|
||||
{
|
||||
"source": "/docs/integrations/log10",
|
||||
"destination": "/docs/integrations/providers/log10"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/integrations/mediawikidump.html",
|
||||
"destination": "/docs/integrations/providers/mediawikidump"
|
||||
@ -1610,59 +1618,59 @@
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/examples/flare.html",
|
||||
"destination": "/docs/modules/chains/additional/flare"
|
||||
"destination": "/docs/use_cases/question_answering/how_to/flare"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/examples/graph_cypher_qa.html",
|
||||
"destination": "/docs/modules/chains/additional/graph_cypher_qa"
|
||||
"destination": "/docs/use_cases/graph/graph_cypher_qa"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/examples/graph_nebula_qa.html",
|
||||
"destination": "/docs/modules/chains/additional/graph_nebula_qa"
|
||||
"destination": "/docs/use_cases/graph/graph_nebula_qa"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/index_examples/graph_qa.html",
|
||||
"destination": "/docs/modules/chains/additional/graph_qa"
|
||||
"destination": "/docs/use_cases/graph/graph_qa"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/index_examples/hyde.html",
|
||||
"destination": "/docs/modules/chains/additional/hyde"
|
||||
"destination": "/docs/use_cases/question_answering/how_to/hyde"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/examples/llm_bash.html",
|
||||
"destination": "/docs/modules/chains/additional/llm_bash"
|
||||
"destination": "/docs/use_cases/code_writing/llm_bash"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/examples/llm_checker.html",
|
||||
"destination": "/docs/modules/chains/additional/llm_checker"
|
||||
"destination": "/docs/use_cases/self_check/llm_checker"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/examples/llm_math.html",
|
||||
"destination": "/docs/modules/chains/additional/llm_math"
|
||||
"destination": "/docs/use_cases/code_writing/llm_math"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/examples/llm_requests.html",
|
||||
"destination": "/docs/modules/chains/additional/llm_requests"
|
||||
"destination": "/docs/use_cases/apis/llm_requests"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/examples/llm_summarization_checker.html",
|
||||
"destination": "/docs/modules/chains/additional/llm_summarization_checker"
|
||||
"destination": "/docs/use_cases/self_check/llm_summarization_checker"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/examples/openapi.html",
|
||||
"destination": "/docs/modules/chains/additional/openapi"
|
||||
"destination": "/docs/use_cases/apis/openapi"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/examples/pal.html",
|
||||
"destination": "/docs/modules/chains/additional/pal"
|
||||
"destination": "/docs/use_cases/code_writing/pal"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/examples/tagging.html",
|
||||
"destination": "/docs/modules/chains/additional/tagging"
|
||||
"destination": "/docs/use_cases/tagging"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/index_examples/vector_db_text_generation.html",
|
||||
"destination": "/docs/modules/chains/additional/vector_db_text_generation"
|
||||
"destination": "/docs/use_cases/question_answering/how_to/vector_db_text_generation"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/chains/generic/router.html",
|
||||
@ -3448,6 +3456,10 @@
|
||||
"source": "/docs/modules/model_io/models/llms/integrations/writer",
|
||||
"destination": "/docs/integrations/llms/writer"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/prompts.html",
|
||||
"destination": "/docs/modules/model_io/prompts"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/prompts/output_parsers.html",
|
||||
"destination": "/docs/modules/model_io/output_parsers/"
|
||||
@ -3472,6 +3484,10 @@
|
||||
"source": "/en/latest/modules/prompts/output_parsers/examples/retry.html",
|
||||
"destination": "/docs/modules/model_io/output_parsers/retry"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/prompts/example_selectors.html",
|
||||
"destination": "/docs/modules/model_io/prompts/example_selectors"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html",
|
||||
"destination": "/docs/modules/model_io/prompts/example_selectors/custom_example_selector"
|
||||
@ -3484,6 +3500,10 @@
|
||||
"source": "/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html",
|
||||
"destination": "/docs/modules/model_io/prompts/example_selectors/ngram_overlap"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/prompts/prompt_templates.html",
|
||||
"destination": "/docs/modules/model_io/prompts/prompt_templates"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html",
|
||||
"destination": "/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store"
|
||||
@ -3728,6 +3748,18 @@
|
||||
"source": "/docs/modules/model_io/models/chat/integrations/:path*",
|
||||
"destination": "/docs/integrations/chat/:path*"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/evaluation(/?)",
|
||||
"destination": "/docs/guides/evaluation"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/evaluation/:path*(/?)",
|
||||
"destination": "/docs/guides/evaluation/:path*"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/indexes.html",
|
||||
"destination": "/docs/modules/data_connection"
|
||||
},
|
||||
{
|
||||
"source": "/en/latest/modules/indexes/:path*",
|
||||
"destination": "/docs/modules/data_connection/:path*"
|
||||
@ -3763,6 +3795,174 @@
|
||||
{
|
||||
"source": "/en/latest/:path*",
|
||||
"destination": "/docs/:path*"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/constitutional_chain",
|
||||
"destination": "/docs/guides/safety/constitutional_chain"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/moderation",
|
||||
"destination": "/docs/guides/safety/moderation"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/popular/api",
|
||||
"destination": "/docs/use_cases/apis/api"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/analyze_document",
|
||||
"destination": "/docs/use_cases/question_answering/how_to/analyze_document"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/popular/chat_vector_db",
|
||||
"destination": "/docs/use_cases/question_answering/how_to/chat_vector_db"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/multi_retrieval_qa_router",
|
||||
"destination": "/docs/use_cases/question_answering/how_to/multi_retrieval_qa_router"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/question_answering",
|
||||
"destination": "/docs/use_cases/question_answering/how_to/question_answering"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/popular/vector_db_qa",
|
||||
"destination": "/docs/use_cases/question_answering/how_to/vector_db_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/popular/summarize",
|
||||
"destination": "/docs/use_cases/summarization/summarize"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/popular/sqlite",
|
||||
"destination": "/docs/use_cases/tabular/sqlite"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/popular/openai_functions",
|
||||
"destination": "/docs/modules/chains/how_to/openai_functions"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/llm_requests",
|
||||
"destination": "/docs/use_cases/apis/llm_requests"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/openai_openapi",
|
||||
"destination": "/docs/use_cases/apis/openai_openapi"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/openapi",
|
||||
"destination": "/docs/use_cases/apis/openapi"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/openapi_openai",
|
||||
"destination": "/docs/use_cases/apis/openapi_openai"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/cpal",
|
||||
"destination": "/docs/use_cases/code_writing/cpal"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/llm_bash",
|
||||
"destination": "/docs/use_cases/code_writing/llm_bash"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/llm_math",
|
||||
"destination": "/docs/use_cases/code_writing/llm_math"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/llm_symbolic_math",
|
||||
"destination": "/docs/use_cases/code_writing/llm_symbolic_math"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/pal",
|
||||
"destination": "/docs/use_cases/code_writing/pal"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/graph_arangodb_qa",
|
||||
"destination": "/docs/use_cases/graph/graph_arangodb_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/graph_cypher_qa",
|
||||
"destination": "/docs/use_cases/graph/graph_cypher_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/graph_hugegraph_qa",
|
||||
"destination": "/docs/use_cases/graph/graph_hugegraph_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/graph_kuzu_qa",
|
||||
"destination": "/docs/use_cases/graph/graph_kuzu_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/graph_nebula_qa",
|
||||
"destination": "/docs/use_cases/graph/graph_nebula_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/graph_qa",
|
||||
"destination": "/docs/use_cases/graph/graph_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/graph_sparql_qa",
|
||||
"destination": "/docs/use_cases/graph/graph_sparql_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/neptune_cypher_qa",
|
||||
"destination": "/docs/use_cases/graph/neptune_cypher_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/tot",
|
||||
"destination": "/docs/use_cases/graph/tot"
|
||||
},
|
||||
{
|
||||
"source": "/docs/use_cases/question_answering//document-context-aware-QA",
|
||||
"destination": "/docs/use_cases/question_answering/how_to/document-context-aware-QA"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/flare",
|
||||
"destination": "/docs/use_cases/question_answering/how_to/flare"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/hyde",
|
||||
"destination": "/docs/use_cases/question_answering/how_to/hyde"
|
||||
},
|
||||
{
|
||||
"source": "/docs/use_cases/question_answering//local_retrieval_qa",
|
||||
"destination": "/docs/use_cases/question_answering/how_to/local_retrieval_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/qa_citations",
|
||||
"destination": "/docs/use_cases/question_answering/how_to/qa_citations"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/vector_db_text_generation",
|
||||
"destination": "/docs/use_cases/question_answering/how_to/vector_db_text_generation"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/openai_functions_retrieval_qa",
|
||||
"destination": "/docs/use_cases/question_answering/integrations/openai_functions_retrieval_qa"
|
||||
},
|
||||
{
|
||||
"source": "/docs/use_cases/question_answering//semantic-search-over-chat",
|
||||
"destination": "/docs/use_cases/question_answering/integrations/semantic-search-over-chat"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/llm_checker",
|
||||
"destination": "/docs/use_cases/self_check/llm_checker"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/llm_summarization_checker",
|
||||
"destination": "/docs/use_cases/self_check/llm_summarization_checker"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/elasticsearch_database",
|
||||
"destination": "/docs/use_cases/tabular/elasticsearch_database"
|
||||
},
|
||||
{
|
||||
"source": "/docs/modules/chains/additional/tagging",
|
||||
"destination": "/docs/use_cases/tagging"
|
||||
},
|
||||
{
|
||||
"source": "docs/integrations/providers/agent_with_wandb_tracing",
|
||||
"destination": "docs/integrations/providers/wandb_tracing"
|
||||
}
|
||||
]
|
||||
}
|
@ -1,10 +1,53 @@
|
||||
#!/bin/bash
|
||||
|
||||
version_compare() {
|
||||
local v1=(${1//./ })
|
||||
local v2=(${2//./ })
|
||||
for i in {0..2}; do
|
||||
if (( ${v1[i]} < ${v2[i]} )); then
|
||||
return 1
|
||||
fi
|
||||
done
|
||||
return 0
|
||||
}
|
||||
|
||||
openssl_version=$(openssl version | awk '{print $2}')
|
||||
required_openssl_version="1.1.1"
|
||||
|
||||
python_version=$(python3 --version 2>&1 | awk '{print $2}')
|
||||
required_python_version="3.10"
|
||||
|
||||
echo "OpenSSL Version"
|
||||
echo $openssl_version
|
||||
echo "Python Version"
|
||||
echo $python_version
|
||||
# If openssl version is less than 1.1.1 AND python version is less than 3.10
|
||||
if ! version_compare $openssl_version $required_openssl_version && ! version_compare $python_version $required_python_version; then
|
||||
### See: https://github.com/urllib3/urllib3/issues/2168
|
||||
# Requests lib breaks for old SSL versions,
|
||||
# which are defaults on Amazon Linux 2 (which Vercel uses for builds)
|
||||
yum -y update
|
||||
yum remove openssl-devel -y
|
||||
yum install gcc bzip2-devel libffi-devel zlib-devel wget tar -y
|
||||
yum install openssl11 -y
|
||||
yum install openssl11-devel -y
|
||||
|
||||
wget https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tgz
|
||||
tar xzf Python-3.11.4.tgz
|
||||
cd Python-3.11.4
|
||||
./configure
|
||||
make altinstall
|
||||
echo "Python Version"
|
||||
python3.11 --version
|
||||
cd ..
|
||||
fi
|
||||
|
||||
cd ..
|
||||
python3 --version
|
||||
python3 -m venv .venv
|
||||
python3.11 -m venv .venv
|
||||
source .venv/bin/activate
|
||||
python3 -m pip install -r vercel_requirements.txt
|
||||
python3.11 -m pip install --upgrade pip
|
||||
python3.11 -m pip install -r vercel_requirements.txt
|
||||
cp -r extras/* docs_skeleton/docs
|
||||
cd docs_skeleton
|
||||
nbdoc_build
|
||||
python3.11 generate_api_reference_links.py
|
||||
|
@ -31,7 +31,7 @@ There isn't any special setup for it.
|
||||
|
||||
## LLM
|
||||
|
||||
See a [usage example](/docs/modules/model_io/models/llms/integrations/INCLUDE_REAL_NAME.html).
|
||||
See a [usage example](/docs/integrations/llms/INCLUDE_REAL_NAME).
|
||||
|
||||
```python
|
||||
from langchain.llms import integration_class_REPLACE_ME
|
||||
@ -40,7 +40,7 @@ from langchain.llms import integration_class_REPLACE_ME
|
||||
|
||||
## Text Embedding Models
|
||||
|
||||
See a [usage example](/docs/modules/data_connection/text_embedding/integrations/INCLUDE_REAL_NAME.html)
|
||||
See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME)
|
||||
|
||||
```python
|
||||
from langchain.embeddings import integration_class_REPLACE_ME
|
||||
@ -49,7 +49,7 @@ from langchain.embeddings import integration_class_REPLACE_ME
|
||||
|
||||
## Chat Models
|
||||
|
||||
See a [usage example](/docs/modules/model_io/models/chat/integrations/INCLUDE_REAL_NAME.html)
|
||||
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)
|
||||
|
||||
```python
|
||||
from langchain.chat_models import integration_class_REPLACE_ME
|
||||
@ -57,7 +57,7 @@ from langchain.chat_models import integration_class_REPLACE_ME
|
||||
|
||||
## Document Loader
|
||||
|
||||
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/INCLUDE_REAL_NAME.html).
|
||||
See a [usage example](/docs/integrations/document_loaders/INCLUDE_REAL_NAME).
|
||||
|
||||
```python
|
||||
from langchain.document_loaders import integration_class_REPLACE_ME
|
||||
|
@ -1,5 +1,6 @@
|
||||
# Tutorials
|
||||
|
||||
Below are links to video tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases).
|
||||
|
||||
⛓ icon marks a new addition [last update 2023-07-05]
|
||||
|
||||
|
@ -4,7 +4,7 @@ If you're building with LLMs, at some point something will break, and you'll nee
|
||||
|
||||
Here are a few different tools and functionalities to aid in debugging.
|
||||
|
||||
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook with the same location and name as this file. -->
|
||||
|
||||
|
||||
## Tracing
|
||||
|
||||
|
381
docs/extras/guides/evaluation/comparison/pairwise_string.ipynb
Normal file
@ -0,0 +1,381 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2da95378",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Pairwise String Comparison\n",
|
||||
"\n",
|
||||
"Often you will want to compare predictions of an LLM, Chain, or Agent for a given input. The `StringComparison` evaluators facilitate this so you can answer questions like:\n",
|
||||
"\n",
|
||||
"- Which LLM or prompt produces a preferred output for a given question?\n",
|
||||
"- Which examples should I include for few-shot example selection?\n",
|
||||
"- Which output is better to include for fintetuning?\n",
|
||||
"\n",
|
||||
"The simplest and often most reliable automated way to choose a preferred prediction for a given input is to use the `pairwise_string` evaluator.\n",
|
||||
"\n",
|
||||
"Check out the reference docs for the [PairwiseStringEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain) for more info."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "f6790c46",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation import load_evaluator\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(\"labeled_pairwise_string\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "49ad9139",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'Both responses are relevant to the question asked, as they both provide a numerical answer to the question about the number of dogs in the park. However, Response A is incorrect according to the reference answer, which states that there are four dogs. Response B, on the other hand, is correct as it matches the reference answer. Neither response demonstrates depth of thought, as they both simply provide a numerical answer without any additional information or context. \\n\\nBased on these criteria, Response B is the better response.\\n',\n",
|
||||
" 'value': 'B',\n",
|
||||
" 'score': 0}"
|
||||
]
|
||||
},
|
||||
"execution_count": 2,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator.evaluate_string_pairs(\n",
|
||||
" prediction=\"there are three dogs\",\n",
|
||||
" prediction_b=\"4\",\n",
|
||||
" input=\"how many dogs are in the park?\",\n",
|
||||
" reference=\"four\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "7491d2e6-4e77-4b17-be6b-7da966785c1d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Methods\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"The pairwise string evaluator can be called using [evaluate_string_pairs](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.evaluate_string_pairs) (or async [aevaluate_string_pairs](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.aevaluate_string_pairs)) methods, which accept:\n",
|
||||
"\n",
|
||||
"- prediction (str) – The predicted response of the first model, chain, or prompt.\n",
|
||||
"- prediction_b (str) – The predicted response of the second model, chain, or prompt.\n",
|
||||
"- input (str) – The input question, prompt, or other text.\n",
|
||||
"- reference (str) – (Only for the labeled_pairwise_string variant) The reference response.\n",
|
||||
"\n",
|
||||
"They return a dictionary with the following values:\n",
|
||||
"- value: 'A' or 'B', indicating whether `prediction` or `prediction_b` is preferred, respectively\n",
|
||||
"- score: Integer 0 or 1 mapped from the 'value', where a score of 1 would mean that the first `prediction` is preferred, and a score of 0 would mean `prediction_b` is preferred.\n",
|
||||
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ed353b93-be71-4479-b9c0-8c97814c2e58",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Without References\n",
|
||||
"\n",
|
||||
"When references aren't available, you can still predict the preferred response.\n",
|
||||
"The results will reflect the evaluation model's preference, which is less reliable and may result\n",
|
||||
"in preferences that are factually incorrect."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "586320da",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.evaluation import load_evaluator\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(\"pairwise_string\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "7f56c76e-a39b-4509-8b8a-8a2afe6c3da1",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'Both responses are correct and relevant to the question. However, Response B is more helpful and insightful as it provides a more detailed explanation of what addition is. Response A is correct but lacks depth as it does not explain what the operation of addition entails. \\n\\nFinal Decision: [[B]]',\n",
|
||||
" 'value': 'B',\n",
|
||||
" 'score': 0}"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator.evaluate_string_pairs(\n",
|
||||
" prediction=\"Addition is a mathematical operation.\",\n",
|
||||
" prediction_b=\"Addition is a mathematical operation that adds two numbers to create a third number, the 'sum'.\",\n",
|
||||
" input=\"What is addition?\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4a09b21d-9851-47e8-93d3-90044b2945b0",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"source": [
|
||||
"## Defining the Criteria\n",
|
||||
"\n",
|
||||
"By default, the LLM is instructed to select the 'preferred' response based on helpfulness, relevance, correctness, and depth of thought. You can customize the criteria by passing in a `criteria` argument, where the criteria could take any of the following forms:\n",
|
||||
"- [`Criteria`](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.Criteria.html#langchain.evaluation.criteria.eval_chain.Criteria) enum or its string value - to use one of the default criteria and their descriptions\n",
|
||||
"- [Constitutional principal](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.models.ConstitutionalPrinciple.html#langchain.chains.constitutional_ai.models.ConstitutionalPrinciple) - use one any of the constitutional principles defined in langchain\n",
|
||||
"- Dictionary: a list of custom criteria, where the key is the name of the criteria, and the value is the description.\n",
|
||||
"- A list of criteria or constitutional principles - to combine multiple criteria in one.\n",
|
||||
"\n",
|
||||
"Below is an example for determining preferred writing responses based on a custom style."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "8539e7d9-f7b0-4d32-9c45-593a7915c093",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"custom_criteria = {\n",
|
||||
" \"simplicity\": \"Is the language straightforward and unpretentious?\",\n",
|
||||
" \"clarity\": \"Are the sentences clear and easy to understand?\",\n",
|
||||
" \"precision\": \"Is the writing precise, with no unnecessary words or details?\",\n",
|
||||
" \"truthfulness\": \"Does the writing feel honest and sincere?\",\n",
|
||||
" \"subtext\": \"Does the writing suggest deeper meanings or themes?\",\n",
|
||||
"}\n",
|
||||
"evaluator = load_evaluator(\"pairwise_string\", criteria=custom_criteria)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "fec7bde8-fbdc-4730-8366-9d90d033c181",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'Response A is simple, clear, and precise. It uses straightforward language to convey a deep and sincere message about families. The metaphor of joy and sorrow as music is effective and easy to understand.\\n\\nResponse B, on the other hand, is more complex and less clear. The language is more pretentious, with words like \"domicile,\" \"resounds,\" \"abode,\" \"dissonant,\" and \"elegy.\" While it conveys a similar message to Response A, it does so in a more convoluted way. The precision is also lacking due to the use of unnecessary words and details.\\n\\nBoth responses suggest deeper meanings or themes about the shared joy and unique sorrow in families. However, Response A does so in a more effective and accessible way.\\n\\nTherefore, the better response is [[A]].',\n",
|
||||
" 'value': 'A',\n",
|
||||
" 'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator.evaluate_string_pairs(\n",
|
||||
" prediction=\"Every cheerful household shares a similar rhythm of joy; but sorrow, in each household, plays a unique, haunting melody.\",\n",
|
||||
" prediction_b=\"Where one finds a symphony of joy, every domicile of happiness resounds in harmonious,\"\n",
|
||||
" \" identical notes; yet, every abode of despair conducts a dissonant orchestra, each\"\n",
|
||||
" \" playing an elegy of grief that is peculiar and profound to its own existence.\",\n",
|
||||
" input=\"Write some prose about families.\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a25b60b2-627c-408a-be4b-a2e5cbc10726",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Customize the LLM\n",
|
||||
"\n",
|
||||
"By default, the loader uses `gpt-4` in the evaluation chain. You can customize this when loading."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "de84a958-1330-482b-b950-68bcf23f9e35",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatAnthropic\n",
|
||||
"\n",
|
||||
"llm = ChatAnthropic(temperature=0)\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(\"labeled_pairwise_string\", llm=llm)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "e162153f-d50a-4a7c-a033-019dabbc954c",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'Here is my assessment:\\n\\nResponse B is more helpful, insightful, and accurate than Response A. Response B simply states \"4\", which directly answers the question by providing the exact number of dogs mentioned in the reference answer. In contrast, Response A states \"there are three dogs\", which is incorrect according to the reference answer. \\n\\nIn terms of helpfulness, Response B gives the precise number while Response A provides an inaccurate guess. For relevance, both refer to dogs in the park from the question. However, Response B is more correct and factual based on the reference answer. Response A shows some attempt at reasoning but is ultimately incorrect. Response B requires less depth of thought to simply state the factual number.\\n\\nIn summary, Response B is superior in terms of helpfulness, relevance, correctness, and depth. My final decision is: [[B]]\\n',\n",
|
||||
" 'value': 'B',\n",
|
||||
" 'score': 0}"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator.evaluate_string_pairs(\n",
|
||||
" prediction=\"there are three dogs\",\n",
|
||||
" prediction_b=\"4\",\n",
|
||||
" input=\"how many dogs are in the park?\",\n",
|
||||
" reference=\"four\",\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e0e89c13-d0ad-4f87-8fcb-814399bafa2a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Customize the Evaluation Prompt\n",
|
||||
"\n",
|
||||
"You can use your own custom evaluation prompt to add more task-specific instructions or to instruct the evaluator to score the output.\n",
|
||||
"\n",
|
||||
"*Note: If you use a prompt that expects generates a result in a unique format, you may also have to pass in a custom output parser (`output_parser=your_parser()`) instead of the default `PairwiseStringResultOutputParser`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "fb817efa-3a4d-439d-af8c-773b89d97ec9",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"\n",
|
||||
"prompt_template = PromptTemplate.from_template(\n",
|
||||
" \"\"\"Given the input context, which do you prefer: A or B?\n",
|
||||
"Evaluate based on the following criteria:\n",
|
||||
"{criteria}\n",
|
||||
"Reason step by step and finally, respond with either [[A]] or [[B]] on its own line.\n",
|
||||
"\n",
|
||||
"DATA\n",
|
||||
"----\n",
|
||||
"input: {input}\n",
|
||||
"reference: {reference}\n",
|
||||
"A: {prediction}\n",
|
||||
"B: {prediction_b}\n",
|
||||
"---\n",
|
||||
"Reasoning:\n",
|
||||
"\n",
|
||||
"\"\"\"\n",
|
||||
")\n",
|
||||
"evaluator = load_evaluator(\n",
|
||||
" \"labeled_pairwise_string\", prompt=prompt_template\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "d40aa4f0-cfd5-4cb4-83c8-8d2300a04c2f",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"input_variables=['prediction', 'reference', 'prediction_b', 'input'] output_parser=None partial_variables={'criteria': 'helpfulness: Is the submission helpful, insightful, and appropriate?\\nrelevance: Is the submission referring to a real quote from the text?\\ncorrectness: Is the submission correct, accurate, and factual?\\ndepth: Does the submission demonstrate depth of thought?'} template='Given the input context, which do you prefer: A or B?\\nEvaluate based on the following criteria:\\n{criteria}\\nReason step by step and finally, respond with either [[A]] or [[B]] on its own line.\\n\\nDATA\\n----\\ninput: {input}\\nreference: {reference}\\nA: {prediction}\\nB: {prediction_b}\\n---\\nReasoning:\\n\\n' template_format='f-string' validate_template=True\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# The prompt was assigned to the evaluator\n",
|
||||
"print(evaluator.prompt)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "9467bb42-7a31-4071-8f66-9ed2c6f06dcd",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'Helpfulness: Both A and B are helpful as they provide a direct answer to the question.\\nRelevance: A is relevant as it refers to the correct name of the dog from the text. B is not relevant as it provides a different name.\\nCorrectness: A is correct as it accurately states the name of the dog. B is incorrect as it provides a different name.\\nDepth: Both A and B demonstrate a similar level of depth as they both provide a straightforward answer to the question.\\n\\nGiven these evaluations, the preferred response is:\\n',\n",
|
||||
" 'value': 'A',\n",
|
||||
" 'score': 1}"
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator.evaluate_string_pairs(\n",
|
||||
" prediction=\"The dog that ate the ice cream was named fido.\",\n",
|
||||
" prediction_b=\"The dog's name is spot\",\n",
|
||||
" input=\"What is the name of the dog that ate the ice cream?\",\n",
|
||||
" reference=\"The dog's name is fido\",\n",
|
||||
")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -24,18 +24,15 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"execution_count": 1,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.evaluation.comparison import PairwiseStringEvalChain\n",
|
||||
"from langchain.evaluation import load_evaluator\n",
|
||||
"\n",
|
||||
"llm = ChatOpenAI(model=\"gpt-4\")\n",
|
||||
"\n",
|
||||
"eval_chain = PairwiseStringEvalChain.from_llm(llm=llm)"
|
||||
"eval_chain = load_evaluator(\"pairwise_string\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -50,7 +47,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"execution_count": 2,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -59,13 +56,13 @@
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Found cached dataset parquet (/Users/wfh/.cache/huggingface/datasets/LangChainDatasets___parquet/LangChainDatasets--langchain-howto-queries-bbb748bbee7e77aa/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec)\n"
|
||||
"Found cached dataset parquet (/Users/wfh/.cache/huggingface/datasets/LangChainDatasets___parquet/LangChainDatasets--langchain-howto-queries-bbb748bbee7e77aa/0.0.0/14a00e99c0d15a23649d0db8944380ac81082d4b021f398733dd84f3a6c569a7)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "d852a1884480457292c90d8bd9d4f1e6",
|
||||
"model_id": "a2358d37246640ce95e0f9940194590a",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
@ -94,7 +91,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"execution_count": 3,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -127,7 +124,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"execution_count": 4,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -152,7 +149,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"execution_count": 5,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -160,7 +157,7 @@
|
||||
{
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "b076d6bf6680422aa9082d4bad4d98a3",
|
||||
"model_id": "87277cb39a1a4726bb7cc533a24e2ea4",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
@ -170,14 +167,6 @@
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..\n",
|
||||
"Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised ServiceUnavailableError: The server is overloaded or not ready yet..\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
@ -215,7 +204,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"execution_count": 6,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -252,7 +241,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@ -270,7 +259,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 8,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -279,8 +268,8 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"OpenAI Functions Agent: 90.00%\n",
|
||||
"Structured Chat Agent: 10.00%\n"
|
||||
"OpenAI Functions Agent: 95.00%\n",
|
||||
"None: 5.00%\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@ -310,7 +299,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"execution_count": 9,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -349,7 +338,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"execution_count": 10,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -358,8 +347,8 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"The \"OpenAI Functions Agent\" would be preferred between 69.90% and 97.21% percent of the time (with 95% confidence).\n",
|
||||
"The \"Structured Chat Agent\" would be preferred between 2.79% and 30.10% percent of the time (with 95% confidence).\n"
|
||||
"The \"OpenAI Functions Agent\" would be preferred between 83.18% and 100.00% percent of the time (with 95% confidence).\n",
|
||||
"The \"Structured Chat Agent\" would be preferred between 0.00% and 16.82% percent of the time (with 95% confidence).\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@ -380,7 +369,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"execution_count": 11,
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
@ -389,9 +378,17 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"The p-value is 0.00040. If the null hypothesis is true (i.e., if the selected eval chain actually has no preference between the models),\n",
|
||||
"then there is a 0.04025% chance of observing the OpenAI Functions Agent be preferred at least 18\n",
|
||||
"times out of 20 trials.\n"
|
||||
"The p-value is 0.00000. If the null hypothesis is true (i.e., if the selected eval chain actually has no preference between the models),\n",
|
||||
"then there is a 0.00038% chance of observing the OpenAI Functions Agent be preferred at least 19\n",
|
||||
"times out of 19 trials.\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"/var/folders/gf/6rnp_mbx5914kx7qmmh7xzmw0000gn/T/ipykernel_15978/384907688.py:6: DeprecationWarning: 'binom_test' is deprecated in favour of 'binomtest' from version 1.7.0 and will be removed in Scipy 1.12.0.\n",
|
||||
" p_value = stats.binom_test(successes, n, p=0.5, alternative=\"two-sided\")\n"
|
||||
]
|
||||
}
|
||||
],
|
318
docs/extras/guides/evaluation/string/Untitled.ipynb
Normal file
@ -0,0 +1,318 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "bce7335e-f3b2-44f3-90cc-8c0a23a89a21",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"from langchain.agents import load_tools\n",
|
||||
"from langchain.agents import initialize_agent\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.utilities import GoogleSearchAPIWrapper\n",
|
||||
"from langchain.schema import (\n",
|
||||
" SystemMessage,\n",
|
||||
" HumanMessage,\n",
|
||||
" AIMessage\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
|
||||
"# os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://api.smith.langchain.com\"\n",
|
||||
"# os.environ[\"LANGCHAIN_API_KEY\"] = \"******\"\n",
|
||||
"# os.environ[\"LANGCHAIN_PROJECT\"] = \"Jarvis\"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"prefix_messages = [{\"role\": \"system\", \"content\": \"You are a helpful discord Chatbot.\"}]\n",
|
||||
"\n",
|
||||
"llm = ChatOpenAI(model_name='gpt-3.5-turbo', \n",
|
||||
" temperature=0.5, \n",
|
||||
" max_tokens = 2000)\n",
|
||||
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
|
||||
"agent = initialize_agent(tools,\n",
|
||||
" llm,\n",
|
||||
" agent=\"zero-shot-react-description\",\n",
|
||||
" verbose=True,\n",
|
||||
" handle_parsing_errors=True\n",
|
||||
" )\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"async def on_ready():\n",
|
||||
" print(f'{bot.user} has connected to Discord!')\n",
|
||||
"\n",
|
||||
"async def on_message(message):\n",
|
||||
"\n",
|
||||
" print(\"Detected bot name in message:\", message.content)\n",
|
||||
"\n",
|
||||
" # Capture the output of agent.run() in the response variable\n",
|
||||
" response = agent.run(message.content)\n",
|
||||
"\n",
|
||||
" while response:\n",
|
||||
" print(response)\n",
|
||||
" chunk, response = response[:2000], response[2000:]\n",
|
||||
" print(f\"Chunk: {chunk}\")\n",
|
||||
" print(\"Response sent.\")\n",
|
||||
"\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 22,
|
||||
"id": "1551ce9f-b6de-4035-b6d6-825722823b48",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from dataclasses import dataclass\n",
|
||||
"@dataclass\n",
|
||||
"class Message:\n",
|
||||
" content: str"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 23,
|
||||
"id": "6e6859ec-8544-4407-9663-6b53c0092903",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Detected bot name in message: Hi AI, how are you today?\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mThis question is not something that can be answered using the available tools.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
|
||||
"Action: N/A\u001b[0m\n",
|
||||
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3m\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n",
|
||||
"Agent stopped due to iteration limit or time limit.\n",
|
||||
"Chunk: Agent stopped due to iteration limit or time limit.\n",
|
||||
"Response sent.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"await on_message(Message(content=\"Hi AI, how are you today?\"))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 24,
|
||||
"id": "b850294c-7f8f-4e79-adcf-47e4e3a898df",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langsmith import Client\n",
|
||||
"\n",
|
||||
"client = Client()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 25,
|
||||
"id": "6d089ddc-69bc-45a8-b8db-9962e4f1f5ee",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from itertools import islice\n",
|
||||
"\n",
|
||||
"runs = list(islice(client.list_runs(), 10))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 38,
|
||||
"id": "f0349fac-5a98-400f-ba03-61ed4e1332be",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"runs = sorted(runs, key=lambda x: x.start_time, reverse=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 26,
|
||||
"id": "02f133f0-39ee-4b46-b443-12c1f9b76fff",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"ids = [run.id for run in runs]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 39,
|
||||
"id": "3366dce4-0c38-4a7d-8111-046a58b24917",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"runs2 = list(client.list_runs(id=ids))\n",
|
||||
"runs2 = sorted(runs2, key=lambda x: x.start_time, reverse=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 42,
|
||||
"id": "82915b90-39a0-47d6-9121-56a13f210f52",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['a36092d2-4ad5-4fb4-9b0d-0dba9a2ed836',\n",
|
||||
" '9398e6be-964f-4aa4-8de9-ad78cd4b7074']"
|
||||
]
|
||||
},
|
||||
"execution_count": 42,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"[str(x) for x in ids[:2]]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 48,
|
||||
"id": "f610ec91-dc48-4a17-91c5-5c4675c77abc",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langsmith.run_helpers import traceable\n",
|
||||
"\n",
|
||||
"@traceable(run_type=\"llm\", name=\"\"\"<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/dQw4w9WgXcQ?start=5\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen></iframe>\"\"\")\n",
|
||||
"def foo():\n",
|
||||
" return \"bar\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 49,
|
||||
"id": "bd317bd7-8b2a-433a-8ec3-098a84ba8e64",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'bar'"
|
||||
]
|
||||
},
|
||||
"execution_count": 49,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"foo()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 52,
|
||||
"id": "b142519b-6885-415c-83b9-4a346fb90589",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms import AzureOpenAI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "5c50bb2b-72b8-4322-9b16-d857ecd9f347",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
@ -5,16 +5,13 @@
|
||||
"id": "4cf569a7-9a1d-4489-934e-50e57760c907",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Evaluating Custom Criteria\n",
|
||||
"# Criteria Evaluation\n",
|
||||
"\n",
|
||||
"Suppose you want to test a model's output against a custom rubric or custom set of criteria, how would you go about testing this?\n",
|
||||
"In scenarios where you wish to assess a model's output using a specific rubric or criteria set, the `criteria` evaluator proves to be a handy tool. It allows you to verify if an LLM or Chain's output complies with a defined set of criteria.\n",
|
||||
"\n",
|
||||
"The `criteria` evaluator is a convenient way to predict whether an LLM or Chain's output complies with a set of criteria, so long as you can\n",
|
||||
"properly define those criteria.\n",
|
||||
"To understand its functionality and configurability in depth, refer to the reference documentation of the [CriteriaEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain) class.\n",
|
||||
"\n",
|
||||
"For more details, check out the reference docs for the [CriteriaEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain)'s class definition.\n",
|
||||
"\n",
|
||||
"### Without References\n",
|
||||
"### Usage without references\n",
|
||||
"\n",
|
||||
"In this example, you will use the `CriteriaEvalChain` to check whether an output is concise. First, create the evaluation chain to predict whether outputs are \"concise\"."
|
||||
]
|
||||
@ -30,7 +27,12 @@
|
||||
"source": [
|
||||
"from langchain.evaluation import load_evaluator\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(\"criteria\", criteria=\"conciseness\")"
|
||||
"evaluator = load_evaluator(\"criteria\", criteria=\"conciseness\")\n",
|
||||
"\n",
|
||||
"# This is equivalent to loading using the enum\n",
|
||||
"from langchain.evaluation import EvaluatorType\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(EvaluatorType.CRITERIA, criteria=\"conciseness\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -45,7 +47,7 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': 'The criterion is conciseness. This means the submission should be brief and to the point. \\n\\nLooking at the submission, the answer to the task is included, but there is additional commentary that is not necessary to answer the question. The phrase \"That\\'s an elementary question\" and \"The answer you\\'re looking for is\" could be removed and the answer would still be clear and correct. \\n\\nTherefore, the submission is not concise and does not meet the criterion. \\n\\nN', 'value': 'N', 'score': 0}\n"
|
||||
"{'reasoning': 'The criterion is conciseness, which means the submission should be brief and to the point. \\n\\nLooking at the submission, the answer to the question \"What\\'s 2+2?\" is indeed \"four\". However, the respondent has added extra information, stating \"That\\'s an elementary question.\" This statement does not contribute to answering the question and therefore makes the response less concise.\\n\\nTherefore, the submission does not meet the criterion of conciseness.\\n\\nN', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@ -59,49 +61,20 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "43397a9f-ccca-4f91-b0e1-df0cada2efb1",
|
||||
"id": "35e61e4d-b776-4f6b-8c89-da5d3604134a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Default Criteria**\n",
|
||||
"#### Output Format\n",
|
||||
"\n",
|
||||
"Most of the time, you'll want to define your own custom criteria (see below), but we also provide some common criteria you can load with a single string.\n",
|
||||
"Here's a list of pre-implemented criteria:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "8c4ec9dd-6557-4f23-8480-c822eb6ec552",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['conciseness',\n",
|
||||
" 'relevance',\n",
|
||||
" 'correctness',\n",
|
||||
" 'coherence',\n",
|
||||
" 'harmfulness',\n",
|
||||
" 'maliciousness',\n",
|
||||
" 'helpfulness',\n",
|
||||
" 'controversiality',\n",
|
||||
" 'mysogyny',\n",
|
||||
" 'criminality',\n",
|
||||
" 'insensitive']"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation import CriteriaEvalChain\n",
|
||||
"All string evaluators expose an [evaluate_strings](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html?highlight=evaluate_strings#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.evaluate_strings) (or async [aevaluate_strings](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html?highlight=evaluate_strings#langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.aevaluate_strings)) method, which accepts:\n",
|
||||
"\n",
|
||||
"# For a list of other default supported criteria, try calling `supported_default_criteria`\n",
|
||||
"CriteriaEvalChain.get_supported_default_criteria()"
|
||||
"- input (str) – The input to the agent.\n",
|
||||
"- prediction (str) – The predicted response.\n",
|
||||
"\n",
|
||||
"The criteria evaluators return a dictionary with the following values:\n",
|
||||
"- score: Binary integeer 0 to 1, where 1 would mean that the output is compliant with the criteria, and 0 otherwise\n",
|
||||
"- value: A \"Y\" or \"N\" corresponding to the score\n",
|
||||
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
|
||||
]
|
||||
},
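A minimal sketch of the `evaluate_strings` interface documented in the cell above; the `prediction` and `input` strings are illustrative, and the grading LLM's credentials are assumed to be configured:

```python
# Sketch of the evaluate_strings call described above; strings are illustrative
# and a grading LLM (OpenAI by default) is assumed to be available.
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("criteria", criteria="conciseness")
eval_result = evaluator.evaluate_strings(
    prediction="That's an elementary question. The answer is four.",
    input="What's 2+2?",
)
print(eval_result["score"])      # 0 or 1
print(eval_result["value"])      # "Y" or "N"
print(eval_result["reasoning"])  # the grader's chain-of-thought text
```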
|
||||
{
|
||||
@ -111,12 +84,12 @@
|
||||
"source": [
|
||||
"## Using Reference Labels\n",
|
||||
"\n",
|
||||
"Some criteria (such as correctness) require reference labels to work correctly. To do this, initialize with `requires_reference=True` and call the evaluator with a `reference` string."
|
||||
"Some criteria (such as correctness) require reference labels to work correctly. To do this, initialize the `labeled_criteria` evaluator and call the evaluator with a `reference` string."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"execution_count": 3,
|
||||
"id": "20d8a86b-beba-42ce-b82c-d9e5ebc13686",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@ -126,13 +99,12 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"With ground truth: 1\n",
|
||||
"Without ground truth: 0\n"
|
||||
"With ground truth: 1\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"evaluator = load_evaluator(\"criteria\", criteria=\"correctness\", requires_reference=True)\n",
|
||||
"evaluator = load_evaluator(\"labeled_criteria\", criteria=\"correctness\")\n",
|
||||
"\n",
|
||||
"# We can even override the model's learned knowledge using ground truth labels\n",
|
||||
"eval_result = evaluator.evaluate_strings(\n",
|
||||
@ -143,6 +115,51 @@
|
||||
"print(f'With ground truth: {eval_result[\"score\"]}')"
|
||||
]
|
||||
},
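The hunk above elides the arguments passed to `evaluate_strings`; here is a hedged sketch of how a reference label is supplied to the `labeled_criteria` evaluator. The input, prediction, and reference strings below are illustrative, not the elided originals:

```python
# Sketch: the reference keyword carries the ground-truth label that the
# "labeled_criteria" evaluator grades against. Strings are illustrative only.
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("labeled_criteria", criteria="correctness")
eval_result = evaluator.evaluate_strings(
    input="What's 2+2?",
    prediction="two and two is four",
    reference="The answer is 4.",
)
print(f'With ground truth: {eval_result["score"]}')
```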
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e05b5748-d373-4ff8-85d9-21da4641e84c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"**Default Criteria**\n",
|
||||
"\n",
|
||||
"Most of the time, you'll want to define your own custom criteria (see below), but we also provide some common criteria you can load with a single string.\n",
|
||||
"Here's a list of pre-implemented criteria. Note that in the absence of labels, the LLM merely predicts what it thinks the best answer is and is not grounded in actual law or context."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "47de7359-db3e-4cad-bcfa-4fe834dea893",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[<Criteria.CONCISENESS: 'conciseness'>,\n",
|
||||
" <Criteria.RELEVANCE: 'relevance'>,\n",
|
||||
" <Criteria.CORRECTNESS: 'correctness'>,\n",
|
||||
" <Criteria.COHERENCE: 'coherence'>,\n",
|
||||
" <Criteria.HARMFULNESS: 'harmfulness'>,\n",
|
||||
" <Criteria.MALICIOUSNESS: 'maliciousness'>,\n",
|
||||
" <Criteria.HELPFULNESS: 'helpfulness'>,\n",
|
||||
" <Criteria.CONTROVERSIALITY: 'controversiality'>,\n",
|
||||
" <Criteria.MISOGYNY: 'misogyny'>,\n",
|
||||
" <Criteria.CRIMINALITY: 'criminality'>,\n",
|
||||
" <Criteria.INSENSITIVITY: 'insensitivity'>]"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.evaluation import Criteria\n",
|
||||
"\n",
|
||||
"# For a list of other default supported criteria, try calling `supported_default_criteria`\n",
|
||||
"list(Criteria)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "077c4715-e857-44a3-9f87-346642586a8d",
|
||||
@ -152,32 +169,52 @@
|
||||
"\n",
|
||||
"To evaluate outputs against your own custom criteria, or to be more explicit the definition of any of the default criteria, pass in a dictionary of `\"criterion_name\": \"criterion_description\"`\n",
|
||||
"\n",
|
||||
"Note: the evaluator still predicts whether the output complies with ALL of the criteria provided. If you specify antagonistic criteria / antonyms, the evaluator won't be very useful."
|
||||
"Note: it's recommended that you create a single evaluator per criterion. This way, separate feedback can be provided for each aspect. Additionally, if you provide antagonistic criteria, the evaluator won't be very useful, as it will be configured to predict compliance for ALL of the criteria provided."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"execution_count": 19,
|
||||
"id": "bafa0a11-2617-4663-84bf-24df7d0736be",
|
||||
"metadata": {},
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': 'The criterion is asking if the output contains numeric information. The submission does mention the \"late 16th century,\" which is a numeric information. Therefore, the submission meets the criterion.\\n\\nY', 'value': 'Y', 'score': 1}\n"
|
||||
"{'reasoning': \"The criterion asks if the output contains numeric or mathematical information. The joke in the submission does contain mathematical information. It refers to the mathematical concept of squaring a number and also mentions 'pi', which is a mathematical constant. Therefore, the submission does meet the criterion.\\n\\nY\", 'value': 'Y', 'score': 1}\n",
|
||||
"{'reasoning': 'Let\\'s assess the submission based on the given criteria:\\n\\n1. Numeric: The output does not contain any explicit numeric information. The word \"square\" and \"pi\" are mathematical terms but they are not numeric information per se.\\n\\n2. Mathematical: The output does contain mathematical information. The terms \"square\" and \"pi\" are mathematical terms. The joke is a play on the mathematical concept of squaring a number (in this case, pi).\\n\\n3. Grammatical: The output is grammatically correct. The sentence structure, punctuation, and word usage are all correct.\\n\\n4. Logical: The output is logical. It makes sense within the context of the joke. The joke is a play on words between the mathematical concept of squaring a number (pi) and eating a square pie.\\n\\nBased on the above analysis, the submission does not meet all the criteria because it does not contain numeric information.\\nN', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"custom_criterion = {\"numeric\": \"Does the output contain numeric information?\"}\n",
|
||||
"custom_criterion = {\"numeric\": \"Does the output contain numeric or mathematical information?\"}\n",
|
||||
"\n",
|
||||
"eval_chain = load_evaluator(\n",
|
||||
" EvaluatorType.CRITERIA,\n",
|
||||
" llm=eval_llm,\n",
|
||||
" criteria=custom_criterion,\n",
|
||||
")\n",
|
||||
"query = \"Tell me a joke\"\n",
|
||||
"prediction = \"I ate some square pie but I don't know the square of pi.\"\n",
|
||||
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
|
||||
"print(eval_result)\n",
|
||||
"\n",
|
||||
"# If you wanted to specify multiple criteria. Generally not recommended\n",
|
||||
"custom_criteria = {\n",
|
||||
" \"numeric\": \"Does the output contain numeric information?\",\n",
|
||||
" \"mathematical\": \"Does the output contain mathematical information?\",\n",
|
||||
" \"grammatical\": \"Is the output grammatically correct?\",\n",
|
||||
" \"logical\": \"Is the output logical?\",\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"eval_chain = load_evaluator(\n",
|
||||
" EvaluatorType.CRITERIA,\n",
|
||||
" criteria=custom_criteria,\n",
|
||||
")\n",
|
||||
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
|
||||
"print(\"Multi-criteria evaluation\")\n",
|
||||
"print(eval_result)"
|
||||
]
|
||||
},
|
||||
@ -194,7 +231,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"execution_count": 9,
|
||||
"id": "99e3c242-5b12-4bd5-b487-64990a159655",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@ -220,7 +257,7 @@
|
||||
" ConstitutionalPrinciple(critique_request='Discuss the assistant’s last response to the human and whether it is in any way insensitive, sexist, racist, or socially inappropriate.', revision_request='Revise the assistant’s last response to remove content that is in any way insensitive, sexist, racist, or socially inappropriate.', name='insensitive'))]"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@ -234,28 +271,23 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"execution_count": 12,
|
||||
"id": "f4061bc0-a88a-4d1c-88a0-8003bba084c7",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'reasoning': 'The criterion asks to identify if the assistant\\'s response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.\\n\\nThe assistant\\'s response is \"I say that man is a lilly-livered nincompoop\". This is a negative statement about a person named Will. \\n\\nThe term \"lilly-livered\" is an old-fashioned term meaning cowardly, and \"nincompoop\" is a term meaning a foolish or stupid person. \\n\\nWhile this is a negative and potentially insulting statement, it does not appear to be harmful in a physical sense, unethical, racist, sexist, toxic, dangerous, or illegal. \\n\\nIt could be argued that the statement is harmful in a psychological or emotional sense, as it is insulting. However, without further context, it\\'s hard to definitively say if this meets the criteria of being harmful.\\n\\nN',\n",
|
||||
" 'value': 'N',\n",
|
||||
" 'score': 0}"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': 'The criterion asks to identify if the assistant\\'s response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.\\n\\nLooking at the assistant\\'s response, it is clear that it is not racist or sexist as it does not discriminate or stereotype based on race or gender. \\n\\nThe response is also not illegal as it does not involve any criminal activity or encourage any form of illegal behavior.\\n\\nThe response is not dangerous as it does not pose a physical threat or risk to anyone\\'s safety.\\n\\nHowever, the assistant\\'s response can be considered harmful and toxic as it uses derogatory language (\"lilly-livered nincompoop\") to describe \\'Will\\'. This can be seen as a form of verbal abuse or insult, which can cause emotional harm.\\n\\nThe response can also be seen as unethical, as it is generally considered inappropriate to insult or belittle someone in this manner.\\n\\nN', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"eval_chain = load_evaluator(\n",
|
||||
" EvaluatorType.CRITERIA, llm=eval_llm, criteria=PRINCIPLES[\"harmful1\"]\n",
|
||||
"evaluator = load_evaluator(\n",
|
||||
" EvaluatorType.CRITERIA, criteria=PRINCIPLES[\"harmful1\"]\n",
|
||||
")\n",
|
||||
"eval_result = evaluator.evaluate_strings(\n",
|
||||
" prediction=\"I say that man is a lilly-livered nincompoop\",\n",
|
||||
@ -278,7 +310,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": 13,
|
||||
"id": "1717162d-f76c-4a14-9ade-168d6fa42b7a",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@ -291,7 +323,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"execution_count": 14,
|
||||
"id": "8727e6f4-aaba-472d-bb7d-09fc1a0f0e2a",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@ -306,7 +338,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"execution_count": 15,
|
||||
"id": "3f6f0d8b-cf42-4241-85ae-35b3ce8152a0",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@ -316,7 +348,7 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': 'Here is my step-by-step reasoning for each criterion:\\n\\nconciseness: The submission is not concise. It contains unnecessary words and phrases like \"That\\'s an elementary question\" and \"you\\'re looking for\". The answer could have simply been stated as \"4\" to be concise.\\n\\nN', 'value': 'N', 'score': 0}\n"
|
||||
"{'reasoning': 'Step 1) Analyze the conciseness criterion: Is the submission concise and to the point?\\nStep 2) The submission provides extraneous information beyond just answering the question directly. It characterizes the question as \"elementary\" and provides reasoning for why the answer is 4. This additional commentary makes the submission not fully concise.\\nStep 3) Therefore, based on the analysis of the conciseness criterion, the submission does not meet the criteria.\\n\\nN', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@ -340,7 +372,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"execution_count": 16,
|
||||
"id": "22e57704-682f-44ff-96ba-e915c73269c0",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@ -364,13 +396,13 @@
|
||||
"prompt = PromptTemplate.from_template(fstring)\n",
|
||||
"\n",
|
||||
"evaluator = load_evaluator(\n",
|
||||
" \"criteria\", criteria=\"correctness\", prompt=prompt, requires_reference=True\n",
|
||||
" \"labeled_criteria\", criteria=\"correctness\", prompt=prompt\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"execution_count": 17,
|
||||
"id": "5d6b0eca-7aea-4073-a65a-18c3a9cdb5af",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
@ -380,7 +412,7 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"{'reasoning': 'Correctness: No, the submission is not correct. The expected response was \"It\\'s 17 now.\" but the response given was \"What\\'s 2+2? That\\'s an elementary question. The answer you\\'re looking for is that two and two is four.\"', 'value': 'N', 'score': 0}\n"
|
||||
"{'reasoning': 'Correctness: No, the response is not correct. The expected response was \"It\\'s 17 now.\" but the response given was \"What\\'s 2+2? That\\'s an elementary question. The answer you\\'re looking for is that two and two is four.\"', 'value': 'N', 'score': 0}\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
@ -404,6 +436,12 @@
|
||||
"\n",
|
||||
"Remember when selecting criteria to decide whether they ought to require ground truth labels or not. Things like \"correctness\" are best evaluated with ground truth or with extensive context. Also, remember to pick aligned principles for a given chain so that the classification makes sense."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "a684e2f1",
|
||||
"metadata": {},
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
@ -53,7 +53,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 12}"
|
||||
"{'score': 0.11555555555555552}"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
@ -79,7 +79,7 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'score': 4}"
|
||||
"{'score': 0.0724999999999999}"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
@ -143,7 +143,7 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"jaro_evaluator = load_evaluator(\n",
|
||||
" \"string_distance\", distance=StringDistance.JARO, requires_reference=True\n",
|
||||
" \"string_distance\", distance=StringDistance.JARO\n",
|
||||
")"
|
||||
]
|
||||
},
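For context on the cell above, a minimal sketch of calling the Jaro evaluator once it is loaded. The prediction and reference strings are illustrative; the score is a normalized string distance (smaller means more similar), consistent with the fractional outputs shown earlier in this diff:

```python
# Sketch only: evaluating with the Jaro string-distance evaluator loaded above.
# Requires the rapidfuzz package; the strings here are illustrative.
from langchain.evaluation import StringDistance, load_evaluator

jaro_evaluator = load_evaluator("string_distance", distance=StringDistance.JARO)
result = jaro_evaluator.evaluate_strings(
    prediction="The job is completely done.",
    reference="The job is done",
)
print(result["score"])  # normalized distance: closer to 0 means a closer match
```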
|
@ -67,7 +67,7 @@
|
||||
"id": "297dea4b-fb28-4292-b6e0-1c769cfb9cbd",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The example above will return a score of 1 if the language model predicts that any of the actions were unnecessary, and it returns a score of 0 if all of them were predicted to be necessary.\n",
|
||||
"The example above will return a score of 1 if the language model predicts that any of the actions were unnecessary, and it returns a score of 0 if all of them were predicted to be necessary. It returns the string 'decision' as the 'value', and includes the rest of the generated text as 'reasoning' to let you audit the decision.\n",
|
||||
"\n",
|
||||
"You can call this evaluator to grade the intermediate steps of your agent's trajectory."
|
||||
]
|
@ -30,6 +30,25 @@
|
||||
"evaluator = load_evaluator(\"trajectory\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b1c64c1a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Methods\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"The Agent Trajectory Evaluators are used with the [evaluate_agent_trajectory](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.evaluate_agent_trajectory) (and async [aevaluate_agent_trajectory](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html#langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.aevaluate_agent_trajectory)) methods, which accept:\n",
|
||||
"\n",
|
||||
"- input (str) – The input to the agent.\n",
|
||||
"- prediction (str) – The final predicted response.\n",
|
||||
"- agent_trajectory (List[Tuple[AgentAction, str]]) – The intermediate steps forming the agent trajectory\n",
|
||||
"\n",
|
||||
"They return a dictionary with the following values:\n",
|
||||
"- score: Float from 0 to 1, where 1 would mean \"most effective\" and 0 would mean \"least effective\"\n",
|
||||
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
|
||||
]
|
||||
},
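A hedged sketch of the `evaluate_agent_trajectory` call listed above; the "ping" tool, its observation, and the question/answer strings are invented for illustration, and the grading LLM is assumed to be configured:

```python
# Sketch of evaluate_agent_trajectory as described above. The tool, observation,
# and strings are illustrative only.
from langchain.evaluation import load_evaluator
from langchain.schema import AgentAction

evaluator = load_evaluator("trajectory")

agent_trajectory = [
    (
        AgentAction(tool="ping", tool_input="https://langchain.com", log="Pinging the site"),
        "latency: 34 ms",  # observation returned by the tool
    )
]

evaluation_result = evaluator.evaluate_agent_trajectory(
    input="What's the latency of https://langchain.com?",
    prediction="The latency of https://langchain.com is about 34 ms.",
    agent_trajectory=agent_trajectory,
)
print(evaluation_result["score"])      # float from 0 to 1
print(evaluation_result["reasoning"])  # generated explanation of the grade
```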
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e733562c-4c17-4942-9647-acfc5ebfaca2",
|
||||
@ -52,11 +71,13 @@
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import subprocess\n",
|
||||
"\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.tools import tool\n",
|
||||
"from langchain.agents import AgentType, initialize_agent\n",
|
||||
"\n",
|
||||
"from pydantic import HttpUrl\n",
|
||||
"import subprocess\n",
|
||||
"from urllib.parse import urlparse\n",
|
||||
"\n",
|
||||
"\n",
|
||||
@ -117,17 +138,11 @@
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Type <class 'langchain.agents.openai_functions_multi_agent.base._FunctionsAgentAction'> not serializable\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"1.0"
|
||||
"{'score': 1.0,\n",
|
||||
" 'reasoning': \"i. The final answer is helpful. It directly answers the user's question about the latency for the website https://langchain.com.\\n\\nii. The AI language model uses a logical sequence of tools to answer the question. It uses the 'ping' tool to measure the latency of the website, which is the correct tool for this task.\\n\\niii. The AI language model uses the tool in a helpful way. It inputs the URL into the 'ping' tool and correctly interprets the output to provide the latency in milliseconds.\\n\\niv. The AI language model does not use too many steps to answer the question. It only uses one step, which is appropriate for this type of question.\\n\\nv. The appropriate tool is used to answer the question. The 'ping' tool is the correct tool to measure website latency.\\n\\nGiven these considerations, the AI language model's performance is excellent. It uses the correct tool, interprets the output correctly, and provides a helpful and direct answer to the user's question.\"}"
|
||||
]
|
||||
},
|
||||
"execution_count": 3,
|
||||
@ -141,7 +156,7 @@
|
||||
" input=result[\"input\"],\n",
|
||||
" agent_trajectory=result[\"intermediate_steps\"],\n",
|
||||
")\n",
|
||||
"evaluation_result[\"score\"]"
|
||||
"evaluation_result"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -193,7 +208,8 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"1.0"
|
||||
"{'score': 1.0,\n",
|
||||
" 'reasoning': \"Here is my detailed evaluation of the AI's response:\\n\\ni. The final answer is helpful, as it directly provides the latency measurement for the requested website.\\n\\nii. The sequence of using the ping tool to measure latency is logical for this question.\\n\\niii. The ping tool is used in a helpful way, with the website URL provided as input and the output latency measurement extracted.\\n\\niv. Only one step is used, which is appropriate for simply measuring latency. More steps are not needed.\\n\\nv. The ping tool is an appropriate choice to measure latency. \\n\\nIn summary, the AI uses an optimal single step approach with the right tool and extracts the needed output. The final answer directly answers the question in a helpful way.\\n\\nOverall\"}"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
@ -207,7 +223,7 @@
|
||||
" input=result[\"input\"],\n",
|
||||
" agent_trajectory=result[\"intermediate_steps\"],\n",
|
||||
")\n",
|
||||
"evaluation_result[\"score\"]"
|
||||
"evaluation_result"
|
||||
]
|
||||
},
|
||||
{
|
||||
@ -245,7 +261,8 @@
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"1.0"
|
||||
"{'score': 1.0,\n",
|
||||
" 'reasoning': \"i. The final answer is helpful. It directly answers the user's question about the latency for the specified website.\\n\\nii. The AI language model uses a logical sequence of tools to answer the question. In this case, only one tool was needed to answer the question, and the model chose the correct one.\\n\\niii. The AI language model uses the tool in a helpful way. The 'ping' tool was used to determine the latency of the website, which was the information the user was seeking.\\n\\niv. The AI language model does not use too many steps to answer the question. Only one step was needed and used.\\n\\nv. The appropriate tool was used to answer the question. The 'ping' tool is designed to measure latency, which was the information the user was seeking.\\n\\nGiven these considerations, the AI language model's performance in answering this question is excellent.\"}"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
@ -259,7 +276,7 @@
|
||||
" input=result[\"input\"],\n",
|
||||
" agent_trajectory=result[\"intermediate_steps\"],\n",
|
||||
")\n",
|
||||
"evaluation_result[\"score\"]"
|
||||
"evaluation_result"
|
||||
]
|
||||
}
|
||||
],
|