Compare commits


5 Commits

Author              | SHA1       | Message                 | Date
William FH          | 60313c3f62 | Delete Untitled.ipynb   | 2023-07-27 12:48:50 -07:00
William Fu-Hinthorn | b9bd33ff30 | pass child              | 2023-07-24 12:51:32 -07:00
William Fu-Hinthorn | 574d752c7e | thread more             | 2023-07-24 12:11:13 -07:00
William Fu-Hinthorn | 914489a0a1 | update exp              | 2023-07-24 11:56:11 -07:00
William Fu-Hinthorn | 506da41b97 | Add callbacks to memory | 2023-07-24 11:49:33 -07:00
809 changed files with 14000 additions and 29676 deletions


@@ -15,11 +15,7 @@ You may use the button above, or follow these steps to open this repo in a Codes
For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).
## VS Code Dev Containers
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
Note: if you click this link you will open the main repo, not your local clone. To open your own clone, use the following link, replacing `<yourusername>` and `<yourclonedreponame>` accordingly:
https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)
If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
@@ -29,7 +25,7 @@ You can also follow these steps to open this repo in a container using the VS Co
2. Open a locally cloned copy of the code:
- Fork and Clone this repository to your local filesystem.
- Clone this repository to your local filesystem.
- Press <kbd>F1</kbd> and select the **Dev Containers: Open Folder in Container...** command.
- Select the cloned copy of this folder, wait for the container to start, and try things out!


@@ -37,7 +37,6 @@ jobs:
echo version=$(poetry version --short) >> $GITHUB_OUTPUT
- name: Create Release
uses: ncipollo/release-action@v1
if: ${{ inputs.working-directory == 'libs/langchain' }}
with:
artifacts: "dist/*"
token: ${{ secrets.GITHUB_TOKEN }}


@@ -20,5 +20,3 @@ jobs:
uses: actions/checkout@v3
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json

.gitignore

@@ -162,7 +162,6 @@ docs/.docusaurus/
docs/.cache-loader/
docs/_dist
docs/api_reference/api_reference.rst
docs/api_reference/experimental_api_reference.rst
docs/api_reference/_build
docs/api_reference/*/
!docs/api_reference/_static/


@@ -43,10 +43,6 @@ Now:
`from langchain_experimental.sql import SQLDatabaseChain`
Alternatively, if you are just interested in using the query generation part of the SQL chain, you can check out [`create_sql_query_chain`](https://github.com/langchain-ai/langchain/blob/master/docs/extras/use_cases/tabular/sql_query.ipynb)
`from langchain.chains import create_sql_query_chain`
## `load_prompt` for Python files
Note: this only applies if you want to load Python files as prompts.


@@ -12,14 +12,14 @@
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/hwchase17/langchain)
[![GitHub star chart](https://img.shields.io/github/stars/hwchase17/langchain?style=social)](https://star-history.com/#hwchase17/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/langchain-ai/langchain)](https://libraries.io/github/langchain-ai/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/hwchase17/langchain)](https://libraries.io/github/hwchase17/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/hwchase17/langchain)](https://github.com/hwchase17/langchain/issues)
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/hwchase17/langchainjs).
**Production Support:** As you move your LangChains into production, we'd love to offer more comprehensive support.
Please fill out [this form](https://6w1pwbss0py.typeform.com/to/rrbrdTH2) and we'll set up a dedicated support Slack channel.
Please fill out [this form](https://forms.gle/57d8AmXBYp8PP8tZA) and we'll set up a dedicated support Slack channel.
## 🚨 Breaking Changes for select chains (SQLDatabase) on 7/28


@@ -13,6 +13,5 @@ cp -r {docs_skeleton,snippets} _dist
cp -r extras/* _dist/docs_skeleton/docs
cd _dist/docs_skeleton
poetry run nbdoc_build
poetry run python generate_api_reference_links.py
yarn install
yarn start


@@ -7,67 +7,20 @@
# -- Path setup --------------------------------------------------------------
import json
import os
import sys
from pathlib import Path
import toml
from docutils import nodes
from sphinx.util.docutils import SphinxDirective
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
import toml
_DIR = Path(__file__).parent.absolute()
sys.path.insert(0, os.path.abspath("."))
sys.path.insert(0, os.path.abspath("../../libs/langchain"))
sys.path.insert(0, os.path.abspath("../../libs/experimental"))
with (_DIR.parents[1] / "libs" / "langchain" / "pyproject.toml").open("r") as f:
with open("../../libs/langchain/pyproject.toml") as f:
data = toml.load(f)
with (_DIR / "guide_imports.json").open("r") as f:
imported_classes = json.load(f)
class ExampleLinksDirective(SphinxDirective):
    """Directive to generate a list of links to examples.

    We have a script that extracts links to API reference docs
    from our notebook examples. This directive uses that information
    to backlink to the examples from the API reference docs."""

    has_content = False
    required_arguments = 1

    def run(self):
        """Run the directive.

        Called any time :example_links:`ClassName` is used
        in the template *.rst files."""
        class_or_func_name = self.arguments[0]
        links = imported_classes.get(class_or_func_name, {})
        list_node = nodes.bullet_list()
        for doc_name, link in links.items():
            item_node = nodes.list_item()
            para_node = nodes.paragraph()
            link_node = nodes.reference()
            link_node["refuri"] = link
            link_node.append(nodes.Text(doc_name))
            para_node.append(link_node)
            item_node.append(para_node)
            list_node.append(item_node)
        if list_node.children:
            title_node = nodes.title()
            title_node.append(nodes.Text(f"Examples using {class_or_func_name}"))
            return [title_node, list_node]
        return [list_node]


def setup(app):
    app.add_directive("example_links", ExampleLinksDirective)
# -- Project information -----------------------------------------------------


@@ -5,15 +5,13 @@ from pathlib import Path
ROOT_DIR = Path(__file__).parents[2].absolute()
PKG_DIR = ROOT_DIR / "libs" / "langchain" / "langchain"
EXP_DIR = ROOT_DIR / "libs" / "experimental" / "langchain_experimental"
WRITE_FILE = Path(__file__).parent / "api_reference.rst"
EXP_WRITE_FILE = Path(__file__).parent / "experimental_api_reference.rst"
def load_members(dir: Path) -> dict:
def load_members() -> dict:
members: dict = {}
for py in glob.glob(str(dir) + "/**/*.py", recursive=True):
module = py[len(str(dir)) + 1 :].replace(".py", "").replace("/", ".")
for py in glob.glob(str(PKG_DIR) + "/**/*.py", recursive=True):
module = py[len(str(PKG_DIR)) + 1 :].replace(".py", "").replace("/", ".")
top_level = module.split(".")[0]
if top_level not in members:
members[top_level] = {"classes": [], "functions": []}
@@ -28,10 +26,12 @@ def load_members(dir: Path) -> dict:
return members
def construct_doc(pkg: str, members: dict) -> str:
full_doc = f"""\
def construct_doc(members: dict) -> str:
full_doc = """\
.. _api_reference:
=============
``{pkg}`` API Reference
API Reference
=============
"""
@@ -40,12 +40,16 @@ def construct_doc(pkg: str, members: dict) -> str:
functions = _members["functions"]
if not (classes or functions):
continue
section = f":mod:`{pkg}.{module}`"
module_title = module.replace("_", " ").title()
if module_title == "Llms":
module_title = "LLMs"
section = f":mod:`langchain.{module}`: {module_title}"
full_doc += f"""\
{section}
{'=' * (len(section) + 1)}
.. automodule:: {pkg}.{module}
.. automodule:: langchain.{module}
:no-members:
:no-inherited-members:
@@ -56,7 +60,7 @@ def construct_doc(pkg: str, members: dict) -> str:
full_doc += f"""\
Classes
--------------
.. currentmodule:: {pkg}
.. currentmodule:: langchain
.. autosummary::
:toctree: {module}
@@ -70,11 +74,10 @@ Classes
full_doc += f"""\
Functions
--------------
.. currentmodule:: {pkg}
.. currentmodule:: langchain
.. autosummary::
:toctree: {module}
:template: function.rst
{fstring}
@@ -83,14 +86,10 @@ Functions
def main() -> None:
lc_members = load_members(PKG_DIR)
lc_doc = ".. _api_reference:\n\n" + construct_doc("langchain", lc_members)
members = load_members()
full_doc = construct_doc(members)
with open(WRITE_FILE, "w") as f:
f.write(lc_doc)
exp_members = load_members(EXP_DIR)
exp_doc = ".. _experimental_api_reference:\n\n" + construct_doc("langchain_experimental", exp_members)
with open(EXP_WRITE_FILE, "w") as f:
f.write(exp_doc)
f.write(full_doc)
if __name__ == "__main__":

File diff suppressed because one or more lines are too long


@@ -0,0 +1,9 @@
Evaluation
=======================
LangChain has a number of convenient evaluation chains you can use off the shelf to grade your models' outputs.
.. automodule:: langchain.evaluation
:members:
:undoc-members:
:inherited-members:


@@ -26,5 +26,3 @@
{%- endfor %}
{% endif %}
{% endblock %}
.. example_links:: {{ objname }}


@@ -1,8 +0,0 @@
:mod:`{{module}}`.{{objname}}
{{ underline }}==============
.. currentmodule:: {{ module }}
.. autofunction:: {{ objname }}
.. example_links:: {{ objname }}


@@ -45,9 +45,6 @@
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('api_reference') }}">API</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('experimental_api_reference') }}">Experimental</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" target="_blank" rel="noopener noreferrer" href="https://python.langchain.com/">Python Docs</a>
</li>


@@ -745,11 +745,6 @@ span.descname {
background-color: transparent;
padding: 0;
font-family: monospace;
font-size: 1.2rem;
}
em.property {
font-weight: normal;
}
span.descclassname {


@@ -1,24 +0,0 @@
---
sidebar_position: 3
---
# Comparison Evaluators
Comparison evaluators in LangChain help compare two different chain or LLM outputs. These evaluators are helpful for comparative analyses, such as A/B testing between two language models, or comparing different versions of the same model. They can also be useful for things like generating preference scores for AI-assisted reinforcement learning.
These evaluators inherit from the `PairwiseStringEvaluator` class, providing a comparison interface for two strings - typically, the outputs from two different prompts or models, or two versions of the same model. In essence, a comparison evaluator performs an evaluation on a pair of strings and returns a dictionary containing the evaluation score and other relevant details.
To create a custom comparison evaluator, inherit from the `PairwiseStringEvaluator` class and override the `_evaluate_string_pairs` method. If you require asynchronous evaluation, also override the `_aevaluate_string_pairs` method.
Here's a summary of the key methods and properties of a comparison evaluator:
- `evaluate_string_pairs`: Evaluate the output string pairs. Override this method when creating custom evaluators.
- `aevaluate_string_pairs`: Asynchronously evaluate the output string pairs. Override this method for asynchronous evaluation.
- `requires_input`: This property indicates whether this evaluator requires an input string.
- `requires_reference`: This property specifies whether this evaluator requires a reference label.
Detailed information about creating custom evaluators and the available built-in comparison evaluators are provided in the following sections.
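As an illustration, here is a minimal sketch of a custom comparison evaluator; the class name and its length-based preference rule are invented for this example:
```python
from typing import Any, Optional

from langchain.evaluation import PairwiseStringEvaluator


class LengthPreferenceEvaluator(PairwiseStringEvaluator):
    """Toy evaluator (hypothetical): prefers the shorter of the two outputs."""

    def _evaluate_string_pairs(
        self,
        *,
        prediction: str,
        prediction_b: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        preferred = "A" if len(prediction) <= len(prediction_b) else "B"
        return {
            "value": preferred,
            "score": 1 if preferred == "A" else 0,
            "reasoning": "Shorter output preferred by this toy criterion.",
        }


evaluator = LengthPreferenceEvaluator()
print(
    evaluator.evaluate_string_pairs(
        prediction="Four.", prediction_b="There are four of them."
    )
)
```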
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -1,31 +0,0 @@
---
sidebar_position: 6
---
import DocCardList from "@theme/DocCardList";
# Evaluation
Building applications with language models involves many moving parts. One of the most critical components is ensuring that the outcomes produced by your models are reliable and useful across a broad array of inputs, and that they work well with your application's other software components. Ensuring reliability usually boils down to some combination of application design, testing & evaluation, and runtime checks.
The guides in this section review the APIs and functionality LangChain provides to help you better evaluate your applications. Evaluation and testing are both critical when thinking about deploying LLM applications, since production environments require repeatable and useful outcomes.
LangChain offers various types of evaluators to help you measure performance and integrity on diverse data, and we hope to encourage the community to create and share other useful evaluators so everyone can improve. These docs will introduce the evaluator types, how to use them, and provide some examples of their use in real-world scenarios.
Each evaluator type in LangChain comes with ready-to-use implementations and an extensible API that allows for customization according to your unique requirements. Here are some of the types of evaluators we offer:
- [String Evaluators](/docs/guides/evaluation/string/): These evaluators assess the predicted string for a given input, usually comparing it against a reference string.
- [Trajectory Evaluators](/docs/guides/evaluation/trajectory/): These are used to evaluate the entire trajectory of agent actions.
- [Comparison Evaluators](/docs/guides/evaluation/comparison/): These evaluators are designed to compare predictions from two runs on a common input.
These evaluators can be used across various scenarios and can be applied to different chain and LLM implementations in the LangChain library.
We are also working to share guides and cookbooks that demonstrate how to use these evaluators in real-world scenarios, such as:
- [Chain Comparisons](/docs/guides/evaluation/examples/comparisons): This example uses a comparison evaluator to predict the preferred output. It reviews ways to measure confidence intervals to select statistically significant differences in aggregate preference scores across different models or prompts.
## Reference Docs
For detailed information on the available evaluators, including how to instantiate, configure, and customize them, check out the [reference documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.evaluation) directly.
<DocCardList />


@@ -1,27 +0,0 @@
---
sidebar_position: 2
---
# String Evaluators
A string evaluator is a component within LangChain designed to assess the performance of a language model by comparing its generated outputs (predictions) to a reference string or an input. This comparison is a crucial step in the evaluation of language models, providing a measure of the accuracy or quality of the generated text.
In practice, string evaluators are typically used to evaluate a predicted string against a given input, such as a question or a prompt. Often, a reference label or context string is provided to define what a correct or ideal response would look like. These evaluators can be customized to tailor the evaluation process to fit your application's specific requirements.
To create a custom string evaluator, inherit from the `StringEvaluator` class and implement the `_evaluate_strings` method. If you require asynchronous support, also implement the `_aevaluate_strings` method.
Here's a summary of the key attributes and methods associated with a string evaluator:
- `evaluation_name`: Specifies the name of the evaluation.
- `requires_input`: Boolean attribute that indicates whether the evaluator requires an input string. If True, the evaluator will raise an error when the input isn't provided. If False, a warning will be logged if an input _is_ provided, indicating that it will not be considered in the evaluation.
- `requires_reference`: Boolean attribute specifying whether the evaluator requires a reference label. If True, the evaluator will raise an error when the reference isn't provided. If False, a warning will be logged if a reference _is_ provided, indicating that it will not be considered in the evaluation.
String evaluators also implement the following methods:
- `aevaluate_strings`: Asynchronously evaluates the output of the Chain or Language Model, with support for optional input and label.
- `evaluate_strings`: Synchronously evaluates the output of the Chain or Language Model, with support for optional input and label.
The following sections provide detailed information on available string evaluator implementations as well as how to create a custom string evaluator.
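For illustration, here is a minimal sketch of a custom string evaluator; the class name and its exact-match scoring rule are invented for this example:
```python
from typing import Any, Optional

from langchain.evaluation import StringEvaluator


class ExactMatchEvaluator(StringEvaluator):
    """Toy evaluator (hypothetical): scores 1 if prediction equals reference."""

    @property
    def evaluation_name(self) -> str:
        return "exact_match"

    @property
    def requires_reference(self) -> bool:
        return True

    def _evaluate_strings(
        self,
        *,
        prediction: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        return {"score": int(prediction.strip() == (reference or "").strip())}


evaluator = ExactMatchEvaluator()
print(evaluator.evaluate_strings(prediction="four", reference="four"))
```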
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -1,28 +0,0 @@
---
sidebar_position: 4
---
# Trajectory Evaluators
Trajectory Evaluators in LangChain provide a more holistic approach to evaluating an agent. These evaluators assess the full sequence of actions taken by an agent and their corresponding responses, which we refer to as the "trajectory". This allows you to better measure an agent's effectiveness and capabilities.
A Trajectory Evaluator implements the `AgentTrajectoryEvaluator` interface, which requires two main methods:
- `evaluate_agent_trajectory`: This method synchronously evaluates an agent's trajectory.
- `aevaluate_agent_trajectory`: This asynchronous counterpart allows evaluations to be run in parallel for efficiency.
Both methods accept three main parameters:
- `input`: The initial input given to the agent.
- `prediction`: The final predicted response from the agent.
- `agent_trajectory`: The intermediate steps taken by the agent, given as a list of tuples.
These methods return a dictionary. It is recommended that custom implementations return a `score` (a float indicating the effectiveness of the agent) and `reasoning` (a string explaining the reasoning behind the score).
You can capture an agent's trajectory by initializing the agent with the `return_intermediate_steps=True` parameter. This lets you collect all intermediate steps without relying on special callbacks.
For a deeper dive into the implementation and use of Trajectory Evaluators, refer to the sections below.
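For illustration, a minimal sketch of a custom trajectory evaluator; the class name and its step-count scoring rule are invented for this example:
```python
from typing import Any, Optional, Sequence, Tuple

from langchain.evaluation import AgentTrajectoryEvaluator
from langchain.schema import AgentAction


class StepCountEvaluator(AgentTrajectoryEvaluator):
    """Toy evaluator (hypothetical): fewer intermediate steps scores higher."""

    def _evaluate_agent_trajectory(
        self,
        *,
        prediction: str,
        input: str,
        agent_trajectory: Sequence[Tuple[AgentAction, str]],
        reference: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        steps = len(agent_trajectory)
        return {
            "score": 1.0 / (1 + steps),
            "reasoning": f"The agent used {steps} intermediate steps.",
        }
```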
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -1,9 +0,0 @@
# LangChain Expression Language
import DocCardList from "@theme/DocCardList";
LangChain Expression Language is a declarative way to easily compose chains together.
Any chain constructed this way will automatically have full sync, async, and streaming support.
See guides below for how to interact with chains constructed this way as well as cookbook examples.
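As a quick sketch of the idea (assuming an OpenAI API key is configured in the environment):
```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI()

# Composing with `|` yields a chain with sync, async, and streaming support.
chain = prompt | model

print(chain.invoke({"topic": "bears"}))
```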
<DocCardList />


@@ -1,6 +0,0 @@
# Preventing harmful outputs
One of the key concerns with using LLMs is that they may generate harmful or unethical text. This is an area of active research in the field. Here we present some built-in chains inspired by this research, which are intended to make the outputs of LLMs safer.
- [Moderation chain](/docs/use_cases/safety/moderation): Explicitly check if any output text is harmful and flag it.
- [Constitutional chain](/docs/use_cases/safety/constitutional_chain): Prompt the model with a set of principles which should guide its behavior.


@@ -0,0 +1,8 @@
---
sidebar_position: 4
---
# Additional
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -0,0 +1,7 @@
# Dynamically selecting from multiple prompts
This notebook demonstrates how to use the `RouterChain` paradigm to create a chain that dynamically selects the prompt to use for a given input. Specifically we show how to use the `MultiPromptChain` to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt.
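For reference, the pattern looks roughly like this (the prompt names and templates are invented for illustration, and an OpenAI API key is assumed):
```python
from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI

prompt_infos = [
    {
        "name": "physics",
        "description": "Good for answering questions about physics",
        "prompt_template": "You are a physics professor. Answer the question:\n{input}",
    },
    {
        "name": "math",
        "description": "Good for answering math questions",
        "prompt_template": "You are a mathematician. Answer the question:\n{input}",
    },
]

chain = MultiPromptChain.from_prompts(OpenAI(), prompt_infos=prompt_infos)
print(chain.run("What is black body radiation?"))
```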
import Example from "@snippets/modules/chains/additional/multi_prompt_router.mdx"
<Example/>


@@ -1,4 +1,4 @@
# Dynamically select from multiple retrievers
# Dynamically selecting from multiple retrievers
This notebook demonstrates how to use the `RouterChain` paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the `MultiRetrievalQAChain` to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it.
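For reference, the pattern looks roughly like this (`sales_retriever` and `support_retriever` stand in for retrievers you have already built, and an OpenAI API key is assumed):
```python
from langchain.chains.router import MultiRetrievalQAChain
from langchain.llms import OpenAI

retriever_infos = [
    {
        "name": "sales docs",
        "description": "Good for questions about pricing and plans",
        "retriever": sales_retriever,
    },
    {
        "name": "support docs",
        "description": "Good for troubleshooting questions",
        "retriever": support_retriever,
    },
]

chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos)
print(chain.run("How do I reset my password?"))
```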


@@ -1,4 +1,4 @@
# QA over in-memory documents
# Document QA
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](/docs/modules/chains/document/).
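As a minimal sketch (the document content is invented, and an OpenAI API key is assumed):
```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.schema import Document

docs = [
    Document(page_content="LangChain is a framework for building LLM applications.")
]

# The "stuff" chain type simply stuffs all documents into a single prompt.
chain = load_qa_chain(OpenAI(), chain_type="stuff")
print(chain.run(input_documents=docs, question="What is LangChain?"))
```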


@@ -1,6 +1,6 @@
# Sequential
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->
The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.
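For example, a minimal two-step sequence (the prompts are invented for illustration, and an OpenAI API key is assumed):
```python
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI()

# Step 1 produces a title; step 2 receives that title as its input.
title_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write a title for a story about {topic}."),
)
synopsis_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(
        "Write a one-sentence synopsis for the story titled: {title}"
    ),
)

overall = SimpleSequentialChain(chains=[title_chain, synopsis_chain])
print(overall.run("a robot learning to paint"))
```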


@@ -2,7 +2,7 @@
sidebar_position: 2
---
# Store and reference chat history
# Conversational Retrieval QA
The ConversationalRetrievalQA chain builds on RetrievalQAChain to provide a chat history component.
It first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question answering chain to return a response.
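A minimal sketch of that flow (`vectorstore` stands in for an index you have already built, and an OpenAI API key is assumed):
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

qa = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(), retriever=vectorstore.as_retriever()
)

chat_history = []
result = qa({"question": "What is LangChain?", "chat_history": chat_history})
chat_history.append(("What is LangChain?", result["answer"]))
```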


@@ -0,0 +1,8 @@
---
sidebar_position: 3
---
# Popular
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -1,7 +1,7 @@
---
sidebar_position: 1
---
# QA using a Retriever
# Retrieval QA
This example showcases question answering over an index.
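A minimal sketch (`vectorstore` stands in for an index you have already built, and an OpenAI API key is assumed):
```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(), chain_type="stuff", retriever=vectorstore.as_retriever()
)
print(qa.run("What did the author say about evaluation?"))
```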


@@ -0,0 +1 @@
label: 'Integrations'


@@ -0,0 +1,8 @@
---
sidebar_position: 3
---
# Comparison Evaluators
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -0,0 +1,28 @@
---
sidebar_position: 6
---
import DocCardList from "@theme/DocCardList";
# Evaluation
Language models can be unpredictable. This makes it challenging to ship reliable applications to production, where repeatable, useful outcomes across diverse inputs are a minimum requirement. Tests help demonstrate that each component in an LLM application can produce the required or expected functionality. These tests also safeguard against regressions while you improve interconnected pieces of an integrated system. However, measuring the quality of generated text can be challenging. It can be hard to agree on the right set of metrics for your application, and it can be difficult to translate those into better performance. Furthermore, it's common to lack sufficient evaluation data to adequately test the range of inputs and expected outputs for each component when you're just getting started. The LangChain community is building open source tools and guides to help address these challenges.
LangChain exposes different types of evaluators for common types of evaluation. Each type has off-the-shelf implementations you can use to get started, as well as an
extensible API so you can create your own or contribute improvements for everyone to use. The following sections have example notebooks for you to get started.
- [String Evaluators](/docs/modules/evaluation/string/): Evaluate the predicted string for a given input, usually against a reference string
- [Trajectory Evaluators](/docs/modules/evaluation/trajectory/): Evaluate the whole trajectory of agent actions
- [Comparison Evaluators](/docs/modules/evaluation/comparison/): Compare predictions from two runs on a common input
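For example, loading an off-the-shelf criteria evaluator looks roughly like this (an OpenAI API key is assumed for the default evaluation LLM):
```python
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("criteria", criteria="conciseness")
print(
    evaluator.evaluate_strings(
        prediction="Madrid. It has been the capital of Spain since 1561.",
        input="What is the capital of Spain?",
    )
)
```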
This section also provides some additional examples of how you could use these evaluators for different scenarios or apply them to different chain implementations in the LangChain library. Some examples include:
- [Preference Scoring Chain Outputs](/docs/modules/evaluation/examples/comparisons): An example using a comparison evaluator on different models or prompts to select statistically significant differences in aggregate preference scores
## Reference Docs
For detailed information on the available evaluators, including how to instantiate, configure, and customize them, check out the [reference documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.evaluation) directly.
<DocCardList />


@@ -0,0 +1,8 @@
---
sidebar_position: 2
---
# String Evaluators
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -0,0 +1,8 @@
---
sidebar_position: 4
---
# Trajectory Evaluators
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -4,6 +4,6 @@ This notebook shows how to use `ConversationBufferMemory`. This memory allows fo
We can first extract it as a string.
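As a minimal sketch of the string form:
```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.save_context({"input": "hi"}, {"output": "whats up"})

# The buffer is returned as a single string by default.
print(memory.load_memory_variables({}))
```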
import Example from "@snippets/modules/memory/types/buffer.mdx"
import Example from "@snippets/modules/memory/how_to/buffer.mdx"
<Example/>


@@ -4,6 +4,6 @@
Let's first explore the basic functionality of this type of memory.
import Example from "@snippets/modules/memory/types/buffer_window.mdx"
import Example from "@snippets/modules/memory/how_to/buffer_window.mdx"
<Example/>


@@ -1,17 +0,0 @@
---
sidebar_position: 1
---
# Chat Messages
:::info
Head to [Integrations](/docs/integrations/memory/) for documentation on built-in memory integrations with 3rd-party databases and tools.
:::
One of the core utility classes underpinning most (if not all) memory modules is the `ChatMessageHistory` class.
This is a super lightweight wrapper which exposes convenience methods for saving Human messages, AI messages, and then fetching them all.
You may want to use this class directly if you are managing memory outside of a chain.
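For example:
```python
from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("whats up?")

print(history.messages)  # [HumanMessage(...), AIMessage(...)]
```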
import GetStarted from "@snippets/modules/memory/chat_messages/get_started.mdx"
<GetStarted/>


@@ -4,6 +4,6 @@ Entity Memory remembers given facts about specific entities in a conversation. I
Let's first walk through using this functionality.
import Example from "@snippets/modules/memory/types/entity_summary_memory.mdx"
import Example from "@snippets/modules/memory/how_to/entity_summary_memory.mdx"
<Example/>


@@ -1,62 +1,34 @@
---
sidebar_position: 3
---
# Memory
Most LLM applications have a conversational interface. An essential component of a conversation is being able to refer to information introduced earlier in the conversation.
At bare minimum, a conversational system should be able to access some window of past messages directly.
A more complex system will need to have a world model that it is constantly updating, which allows it to do things like maintain information about entities and their relationships.
🚧 _Docs under construction_ 🚧
We call this ability to store information about past interactions "memory".
LangChain provides a lot of utilities for adding memory to a system.
These utilities can be used by themselves or incorporated seamlessly into a chain.
:::info
Head to [Integrations](/docs/integrations/memory/) for documentation on built-in memory integrations with 3rd-party tools.
:::
A memory system needs to support two basic actions: reading and writing.
Recall that every chain defines some core execution logic that expects certain inputs.
Some of these inputs come directly from the user, but some of these inputs can come from memory.
A chain will interact with its memory system twice in a given run.
1. AFTER receiving the initial user inputs but BEFORE executing the core logic, a chain will READ from its memory system and augment the user inputs.
2. AFTER executing the core logic but BEFORE returning the answer, a chain will WRITE the inputs and outputs of the current run to memory, so that they can be referred to in future runs.
By default, Chains and Agents are stateless,
meaning that they treat each incoming query independently (like the underlying LLMs and chat models themselves).
In some applications, like chatbots, it is essential
to remember previous interactions, both in the short and long-term.
The **Memory** class does exactly that.
![memory-diagram](/img/memory_diagram.png)
## Building memory into a system
The two core design decisions in any memory system are:
- How state is stored
- How state is queried
### Storing: List of chat messages
Underlying any memory is a history of all chat interactions.
Even if these are not all used directly, they need to be stored in some form.
One of the key parts of the LangChain memory module is a series of integrations for storing these chat messages,
from in-memory lists to persistent databases.
- [Chat message storage](/docs/modules/memory/chat_messages/): How to work with Chat Messages, and the various integrations offered
### Querying: Data structures and algorithms on top of chat messages
Keeping a list of chat messages is fairly straightforward.
What is less straightforward are the data structures and algorithms built on top of chat messages that serve the most useful view of those messages.
A very simple memory system might just return the most recent messages each run. A slightly more complex memory system might return a succinct summary of the past K messages.
An even more sophisticated system might extract entities from stored messages and only return information about entities referenced in the current run.
Each application can have different requirements for how memory is queried. The memory module should make it easy to both get started with simple memory systems and write your own custom systems if needed.
- [Memory types](/docs/modules/memory/types/): The various data structures and algorithms that make up the memory types LangChain supports
LangChain provides memory components in two forms.
First, LangChain provides helper utilities for managing and manipulating previous chat messages.
These are designed to be modular and useful regardless of how they are used.
Secondly, LangChain provides easy ways to incorporate these utilities into chains.
## Get started
Let's take a look at what Memory actually looks like in LangChain.
Here we'll cover the basics of interacting with an arbitrary memory class.
Memory involves keeping a concept of state around throughout a user's interactions with a language model. A user's interactions with a language model are captured in the concept of ChatMessages, so this boils down to ingesting, capturing, transforming and extracting knowledge from a sequence of chat messages. There are many different ways to do this, each of which exists as its own memory type.
In general, for each type of memory there are two ways to use it: standalone functions that extract information from a sequence of messages, and ways to use that type of memory within a chain.
Memory can return multiple pieces of information (for example, the most recent N messages and a summary of all previous messages). The returned information can either be a string or a list of messages.
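As a small sketch of the two return forms:
```python
from langchain.memory import ConversationBufferMemory

# return_messages=False (the default) returns the history as one string;
# return_messages=True returns it as a list of chat messages.
memory = ConversationBufferMemory(return_messages=True)
memory.save_context({"input": "hi"}, {"output": "whats up"})
print(memory.load_memory_variables({}))
```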
import GetStarted from "@snippets/modules/memory/get_started.mdx"
<GetStarted/>
## Next steps
And that's it for getting started!
Please see the other sections for walkthroughs of more advanced topics,
like custom memory, multiple memories, and more.


@@ -4,6 +4,6 @@ Conversation summary memory summarizes the conversation as it happens and stores
Let's first explore the basic functionality of this type of memory.
import Example from "@snippets/modules/memory/types/summary.mdx"
import Example from "@snippets/modules/memory/how_to/summary.mdx"
<Example/>


@@ -1,8 +0,0 @@
---
sidebar_position: 2
---
# Memory Types
There are many different types of memory.
Each has its own parameters and return types, and is useful in different scenarios.
Please see the individual pages for more detail on each one.


@@ -6,6 +6,6 @@ This differs from most of the other Memory classes in that it doesn't explicitly
In this case, the "docs" are previous conversation snippets. This can be useful to refer to relevant pieces of information that the AI was told earlier in the conversation.
import Example from "@snippets/modules/memory/types/vectorstore_retriever_memory.mdx"
import Example from "@snippets/modules/memory/how_to/vectorstore_retriever_memory.mdx"
<Example/>


@@ -8,7 +8,7 @@ Head to [Integrations](/docs/integrations/llms/) for documentation on built-in i
:::
Large Language Models (LLMs) are a core component of LangChain.
LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.
LangChain does not serve it's own LLMs, but rather provides a standard interface for interacting with many different LLMs.
## Get started
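As a minimal sketch, using the OpenAI integration as one example (an API key is assumed to be configured), every LLM exposes the same calling interface:
```python
from langchain.llms import OpenAI

llm = OpenAI()

# The same calls work for any integration implementing the LLM interface.
print(llm("Tell me a joke"))
print(llm.generate(["Tell me a joke", "Tell me a poem"]))
```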


@@ -1,183 +0,0 @@
import importlib
import inspect
import json
import logging
import os
import re
from pathlib import Path
import argparse

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Base URL for all class documentation
_BASE_URL = "https://api.python.langchain.com/en/latest/"

# Regular expression to match Python code blocks
code_block_re = re.compile(r"^(```python\n)(.*?)(```\n)", re.DOTALL | re.MULTILINE)

# Regular expression to match langchain import lines
_IMPORT_RE = re.compile(
    r"from\s+(langchain\.\w+(\.\w+)*?)\s+import\s+"
    r"((?:\w+(?:,\s*)?)*"  # Match zero or more words separated by a comma+optional ws
    r"(?:\s*\(.*?\))?)",  # Match optional parentheses block
    re.DOTALL,  # Match newlines as well
)

_CURRENT_PATH = Path(__file__).parent.absolute()
# Directory where generated markdown files are stored
_DOCS_DIR = _CURRENT_PATH / "docs"
_JSON_PATH = _CURRENT_PATH.parent / "api_reference" / "guide_imports.json"


def find_files(path):
    """Find all MDX files in the given path"""
    # Check if is file first
    if os.path.isfile(path):
        yield path
        return
    for root, _, files in os.walk(path):
        for file in files:
            if file.endswith(".mdx") or file.endswith(".md"):
                yield os.path.join(root, file)


def get_full_module_name(module_path, class_name):
    """Get full module name using inspect"""
    module = importlib.import_module(module_path)
    class_ = getattr(module, class_name)
    return inspect.getmodule(class_).__name__


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--docs_dir",
        type=str,
        default=_DOCS_DIR,
        help="Directory where generated markdown files are stored",
    )
    return parser.parse_args()


def main():
    """Main function"""
    args = get_args()
    global_imports = {}
    for file in find_files(args.docs_dir):
        print(f"Adding links for imports in {file}")
        file_imports = replace_imports(file)
        if file_imports:
            # Use relative file path as key
            relative_path = (
                os.path.relpath(file, _DOCS_DIR).replace(".mdx", "").replace(".md", "")
            )
            doc_url = f"https://python.langchain.com/docs/{relative_path}"
            for import_info in file_imports:
                doc_title = import_info["title"]
                class_name = import_info["imported"]
                if class_name not in global_imports:
                    global_imports[class_name] = {}
                global_imports[class_name][doc_title] = doc_url
    # Write the global imports information to a JSON file
    _JSON_PATH.parent.mkdir(parents=True, exist_ok=True)
    with _JSON_PATH.open("w") as f:
        json.dump(global_imports, f)


def _get_doc_title(data: str, file_name: str) -> str:
    try:
        return re.findall(r"^#\s+(.*)", data, re.MULTILINE)[0]
    except IndexError:
        pass
    # Parse the rst-style titles
    try:
        return re.findall(r"^(.*)\n=+\n", data, re.MULTILINE)[0]
    except IndexError:
        return file_name


def replace_imports(file):
    """Replace imports in each Python code block with links to their
    documentation and append the import info in a comment"""
    all_imports = []
    with open(file, "r") as f:
        data = f.read()
    file_name = os.path.basename(file)
    _DOC_TITLE = _get_doc_title(data, file_name)

    def replacer(match):
        # Extract the code block content
        code = match.group(2)
        # Replace if any import comment exists
        # TODO: Use our own custom <code> component rather than this
        # injection method
        existing_comment_re = re.compile(r"^<!--IMPORTS:.*?-->\n", re.MULTILINE)
        code = existing_comment_re.sub("", code)

        # Process imports in the code block
        imports = []
        for import_match in _IMPORT_RE.finditer(code):
            module = import_match.group(1)
            imports_str = (
                import_match.group(3).replace("(\n", "").replace("\n)", "")
            )  # Handle newlines within parentheses
            # remove any newline and spaces, then split by comma
            imported_classes = [
                imp.strip()
                for imp in re.split(r",\s*", imports_str.replace("\n", ""))
                if imp.strip()
            ]
            for class_name in imported_classes:
                try:
                    module_path = get_full_module_name(module, class_name)
                except AttributeError as e:
                    logger.warning(f"Could not find module for {class_name}, {e}")
                    continue
                except ImportError as e:
                    logger.warning(f"Failed to load for class {class_name}, {e}")
                    continue
                url = (
                    _BASE_URL
                    + module_path.split(".")[1]
                    + "/"
                    + module_path
                    + "."
                    + class_name
                    + ".html"
                )
                # Add the import information to our list
                imports.append(
                    {
                        "imported": class_name,
                        "source": module,
                        "docs": url,
                        "title": _DOC_TITLE,
                    }
                )
        if imports:
            all_imports.extend(imports)
            # Create a unique comment containing the import information
            import_comment = f"<!--IMPORTS:{json.dumps(imports)}-->"
            # Inject the import comment at the start of the code block
            return match.group(1) + import_comment + "\n" + code + match.group(3)
        else:
            # If there are no imports, return the original match
            return match.group(0)

    # Use re.sub to replace each Python code block
    data = code_block_re.sub(replacer, data)

    with open(file, "w") as f:
        f.write(data)
    return all_imports


if __name__ == "__main__":
    main()


@@ -21,7 +21,7 @@ function Imports({ imports }) {
</h4>
<ul style={{ paddingBottom: "1rem" }}>
{imports.map(({ imported, source, docs }) => (
<li key={imported}>
<li>
<a href={docs}>
<span>{imported}</span>
</a>{" "}
@@ -34,25 +34,14 @@ function Imports({ imports }) {
}
export default function CodeBlockWrapper({ children, ...props }) {
// Initialize imports as an empty array
let imports = [];
// Check if children is a string
if (typeof children === "string") {
// Search for an IMPORTS comment in the code
const match = /<!--IMPORTS:(.*?)-->\n/.exec(children);
if (match) {
imports = JSON.parse(match[1]);
children = children.replace(match[0], "");
}
} else if (children.imports) {
imports = children.imports;
return <CodeBlock {...props}>{children}</CodeBlock>;
}
return (
<>
<CodeBlock {...props}>{children}</CodeBlock>
{imports.length > 0 && <Imports imports={imports} />}
<CodeBlock {...props}>{children.content}</CodeBlock>
<Imports imports={children.imports} />
</>
);
}
}

Binary file not shown (removed image; previous size: 111 KiB).


@@ -1078,7 +1078,7 @@
},
{
"source": "/docs/ecosystem/integrations/",
"destination": "/docs/integrations/"
"destination": "/docs/integrations/providers/"
},
{
"source": "/docs/ecosystem/integrations/:path*",
@@ -1610,59 +1610,59 @@
},
{
"source": "/en/latest/modules/chains/examples/flare.html",
"destination": "/docs/use_cases/question_answering/how_to/flare"
"destination": "/docs/modules/chains/additional/flare"
},
{
"source": "/en/latest/modules/chains/examples/graph_cypher_qa.html",
"destination": "/docs/use_cases/graph/graph_cypher_qa"
"destination": "/docs/modules/chains/additional/graph_cypher_qa"
},
{
"source": "/en/latest/modules/chains/examples/graph_nebula_qa.html",
"destination": "/docs/use_cases/graph/graph_nebula_qa"
"destination": "/docs/modules/chains/additional/graph_nebula_qa"
},
{
"source": "/en/latest/modules/chains/index_examples/graph_qa.html",
"destination": "/docs/use_cases/graph/graph_qa"
"destination": "/docs/modules/chains/additional/graph_qa"
},
{
"source": "/en/latest/modules/chains/index_examples/hyde.html",
"destination": "/docs/use_cases/question_answering/how_to/hyde"
"destination": "/docs/modules/chains/additional/hyde"
},
{
"source": "/en/latest/modules/chains/examples/llm_bash.html",
"destination": "/docs/use_cases/code_writing/llm_bash"
"destination": "/docs/modules/chains/additional/llm_bash"
},
{
"source": "/en/latest/modules/chains/examples/llm_checker.html",
"destination": "/docs/use_cases/self_check/llm_checker"
"destination": "/docs/modules/chains/additional/llm_checker"
},
{
"source": "/en/latest/modules/chains/examples/llm_math.html",
"destination": "/docs/use_cases/code_writing/llm_math"
"destination": "/docs/modules/chains/additional/llm_math"
},
{
"source": "/en/latest/modules/chains/examples/llm_requests.html",
"destination": "/docs/use_cases/apis/llm_requests"
"destination": "/docs/modules/chains/additional/llm_requests"
},
{
"source": "/en/latest/modules/chains/examples/llm_summarization_checker.html",
"destination": "/docs/use_cases/self_check/llm_summarization_checker"
"destination": "/docs/modules/chains/additional/llm_summarization_checker"
},
{
"source": "/en/latest/modules/chains/examples/openapi.html",
"destination": "/docs/use_cases/apis/openapi"
"destination": "/docs/modules/chains/additional/openapi"
},
{
"source": "/en/latest/modules/chains/examples/pal.html",
"destination": "/docs/use_cases/code_writing/pal"
"destination": "/docs/modules/chains/additional/pal"
},
{
"source": "/en/latest/modules/chains/examples/tagging.html",
"destination": "/docs/use_cases/tagging"
"destination": "/docs/modules/chains/additional/tagging"
},
{
"source": "/en/latest/modules/chains/index_examples/vector_db_text_generation.html",
"destination": "/docs/use_cases/question_answering/how_to/vector_db_text_generation"
"destination": "/docs/modules/chains/additional/vector_db_text_generation"
},
{
"source": "/en/latest/modules/chains/generic/router.html",
@@ -3448,10 +3448,6 @@
"source": "/docs/modules/model_io/models/llms/integrations/writer",
"destination": "/docs/integrations/llms/writer"
},
{
"source": "/en/latest/modules/prompts.html",
"destination": "/docs/modules/model_io/prompts"
},
{
"source": "/en/latest/modules/prompts/output_parsers.html",
"destination": "/docs/modules/model_io/output_parsers/"
@@ -3476,10 +3472,6 @@
"source": "/en/latest/modules/prompts/output_parsers/examples/retry.html",
"destination": "/docs/modules/model_io/output_parsers/retry"
},
{
"source": "/en/latest/modules/prompts/example_selectors.html",
"destination": "/docs/modules/model_io/prompts/example_selectors"
},
{
"source": "/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html",
"destination": "/docs/modules/model_io/prompts/example_selectors/custom_example_selector"
@@ -3492,10 +3484,6 @@
"source": "/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html",
"destination": "/docs/modules/model_io/prompts/example_selectors/ngram_overlap"
},
{
"source": "/en/latest/modules/prompts/prompt_templates.html",
"destination": "/docs/modules/model_io/prompts/prompt_templates"
},
{
"source": "/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html",
"destination": "/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store"
@@ -3740,18 +3728,6 @@
"source": "/docs/modules/model_io/models/chat/integrations/:path*",
"destination": "/docs/integrations/chat/:path*"
},
{
"source": "/docs/modules/evaluation(/?)",
"destination": "/docs/guides/evaluation"
},
{
"source": "/docs/modules/evaluation/:path*(/?)",
"destination": "/docs/guides/evaluation/:path*"
},
{
"source": "/en/latest/modules/indexes.html",
"destination": "/docs/modules/data_connection"
},
{
"source": "/en/latest/modules/indexes/:path*",
"destination": "/docs/modules/data_connection/:path*"
@@ -3787,174 +3763,6 @@
{
"source": "/en/latest/:path*",
"destination": "/docs/:path*"
},
{
"source": "/docs/modules/chains/additional/constitutional_chain",
"destination": "/docs/guides/safety/constitutional_chain"
},
{
"source": "/docs/modules/chains/additional/moderation",
"destination": "/docs/guides/safety/moderation"
},
{
"source": "/docs/modules/chains/popular/api",
"destination": "/docs/use_cases/apis/api"
},
{
"source": "/docs/modules/chains/additional/analyze_document",
"destination": "/docs/use_cases/question_answering/how_to/analyze_document"
},
{
"source": "/docs/modules/chains/popular/chat_vector_db",
"destination": "/docs/use_cases/question_answering/how_to/chat_vector_db"
},
{
"source": "/docs/modules/chains/additional/multi_retrieval_qa_router",
"destination": "/docs/use_cases/question_answering/how_to/multi_retrieval_qa_router"
},
{
"source": "/docs/modules/chains/additional/question_answering",
"destination": "/docs/use_cases/question_answering/how_to/question_answering"
},
{
"source": "/docs/modules/chains/popular/vector_db_qa",
"destination": "/docs/use_cases/question_answering/how_to/vector_db_qa"
},
{
"source": "/docs/modules/chains/popular/summarize",
"destination": "/docs/use_cases/summarization/summarize"
},
{
"source": "/docs/modules/chains/popular/sqlite",
"destination": "/docs/use_cases/tabular/sqlite"
},
{
"source": "/docs/modules/chains/popular/openai_functions",
"destination": "/docs/modules/chains/how_to/openai_functions"
},
{
"source": "/docs/modules/chains/additional/llm_requests",
"destination": "/docs/use_cases/apis/llm_requests"
},
{
"source": "/docs/modules/chains/additional/openai_openapi",
"destination": "/docs/use_cases/apis/openai_openapi"
},
{
"source": "/docs/modules/chains/additional/openapi",
"destination": "/docs/use_cases/apis/openapi"
},
{
"source": "/docs/modules/chains/additional/openapi_openai",
"destination": "/docs/use_cases/apis/openapi_openai"
},
{
"source": "/docs/modules/chains/additional/cpal",
"destination": "/docs/use_cases/code_writing/cpal"
},
{
"source": "/docs/modules/chains/additional/llm_bash",
"destination": "/docs/use_cases/code_writing/llm_bash"
},
{
"source": "/docs/modules/chains/additional/llm_math",
"destination": "/docs/use_cases/code_writing/llm_math"
},
{
"source": "/docs/modules/chains/additional/llm_symbolic_math",
"destination": "/docs/use_cases/code_writing/llm_symbolic_math"
},
{
"source": "/docs/modules/chains/additional/pal",
"destination": "/docs/use_cases/code_writing/pal"
},
{
"source": "/docs/modules/chains/additional/graph_arangodb_qa",
"destination": "/docs/use_cases/graph/graph_arangodb_qa"
},
{
"source": "/docs/modules/chains/additional/graph_cypher_qa",
"destination": "/docs/use_cases/graph/graph_cypher_qa"
},
{
"source": "/docs/modules/chains/additional/graph_hugegraph_qa",
"destination": "/docs/use_cases/graph/graph_hugegraph_qa"
},
{
"source": "/docs/modules/chains/additional/graph_kuzu_qa",
"destination": "/docs/use_cases/graph/graph_kuzu_qa"
},
{
"source": "/docs/modules/chains/additional/graph_nebula_qa",
"destination": "/docs/use_cases/graph/graph_nebula_qa"
},
{
"source": "/docs/modules/chains/additional/graph_qa",
"destination": "/docs/use_cases/graph/graph_qa"
},
{
"source": "/docs/modules/chains/additional/graph_sparql_qa",
"destination": "/docs/use_cases/graph/graph_sparql_qa"
},
{
"source": "/docs/modules/chains/additional/neptune_cypher_qa",
"destination": "/docs/use_cases/graph/neptune_cypher_qa"
},
{
"source": "/docs/modules/chains/additional/tot",
"destination": "/docs/use_cases/graph/tot"
},
{
"source": "/docs/use_cases/question_answering//document-context-aware-QA",
"destination": "/docs/use_cases/question_answering/how_to/document-context-aware-QA"
},
{
"source": "/docs/modules/chains/additional/flare",
"destination": "/docs/use_cases/question_answering/how_to/flare"
},
{
"source": "/docs/modules/chains/additional/hyde",
"destination": "/docs/use_cases/question_answering/how_to/hyde"
},
{
"source": "/docs/use_cases/question_answering//local_retrieval_qa",
"destination": "/docs/use_cases/question_answering/how_to/local_retrieval_qa"
},
{
"source": "/docs/modules/chains/additional/qa_citations",
"destination": "/docs/use_cases/question_answering/how_to/qa_citations"
},
{
"source": "/docs/modules/chains/additional/vector_db_text_generation",
"destination": "/docs/use_cases/question_answering/how_to/vector_db_text_generation"
},
{
"source": "/docs/modules/chains/additional/openai_functions_retrieval_qa",
"destination": "/docs/use_cases/question_answering/integrations/openai_functions_retrieval_qa"
},
{
"source": "/docs/use_cases/question_answering//semantic-search-over-chat",
"destination": "/docs/use_cases/question_answering/integrations/semantic-search-over-chat"
},
{
"source": "/docs/modules/chains/additional/llm_checker",
"destination": "/docs/use_cases/self_check/llm_checker"
},
{
"source": "/docs/modules/chains/additional/llm_summarization_checker",
"destination": "/docs/use_cases/self_check/llm_summarization_checker"
},
{
"source": "/docs/modules/chains/additional/elasticsearch_database",
"destination": "/docs/use_cases/tabular/elasticsearch_database"
},
{
"source": "/docs/modules/chains/additional/tagging",
"destination": "/docs/use_cases/tagging"
},
{
"source": "docs/integrations/providers/agent_with_wandb_tracing",
"destination": "docs/integrations/providers/wandb_tracing"
}
]
}


@@ -1,53 +1,10 @@
#!/bin/bash
version_compare() {
    local v1=(${1//./ })
    local v2=(${2//./ })
    for i in {0..2}; do
        # Treat missing components as 0 (e.g. "3.10" vs "3.9.16")
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then
            return 1
        elif (( a > b )); then
            # A higher component means v1 >= v2; stop comparing
            return 0
        fi
    done
    return 0
}
openssl_version=$(openssl version | awk '{print $2}')
required_openssl_version="1.1.1"
python_version=$(python3 --version 2>&1 | awk '{print $2}')
required_python_version="3.10"
echo "OpenSSL Version"
echo $openssl_version
echo "Python Version"
echo $python_version
# If openssl version is less than 1.1.1 AND python version is less than 3.10
if ! version_compare $openssl_version $required_openssl_version && ! version_compare $python_version $required_python_version; then
### See: https://github.com/urllib3/urllib3/issues/2168
# Requests lib breaks for old SSL versions,
# which are defaults on Amazon Linux 2 (which Vercel uses for builds)
yum -y update
yum remove openssl-devel -y
yum install gcc bzip2-devel libffi-devel zlib-devel wget tar -y
yum install openssl11 -y
yum install openssl11-devel -y
wget https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tgz
tar xzf Python-3.11.4.tgz
cd Python-3.11.4
./configure
make altinstall
echo "Python Version"
python3.11 --version
cd ..
fi
cd ..
python3.11 -m venv .venv
python3 --version
python3 -m venv .venv
source .venv/bin/activate
python3.11 -m pip install --upgrade pip
python3.11 -m pip install -r vercel_requirements.txt
python3 -m pip install -r vercel_requirements.txt
cp -r extras/* docs_skeleton/docs
cd docs_skeleton
nbdoc_build
python3.11 generate_api_reference_links.py


@@ -31,7 +31,7 @@ There isn't any special setup for it.
## LLM
See a [usage example](/docs/integrations/llms/INCLUDE_REAL_NAME).
See a [usage example](/docs/modules/model_io/models/llms/integrations/INCLUDE_REAL_NAME.html).
```python
from langchain.llms import integration_class_REPLACE_ME
@@ -40,7 +40,7 @@ from langchain.llms import integration_class_REPLACE_ME
## Text Embedding Models
See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME)
See a [usage example](/docs/modules/data_connection/text_embedding/integrations/INCLUDE_REAL_NAME.html)
```python
from langchain.embeddings import integration_class_REPLACE_ME
@@ -49,7 +49,7 @@ from langchain.embeddings import integration_class_REPLACE_ME
## Chat Models
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)
See a [usage example](/docs/modules/model_io/models/chat/integrations/INCLUDE_REAL_NAME.html)
```python
from langchain.chat_models import integration_class_REPLACE_ME
@@ -57,7 +57,7 @@ from langchain.chat_models import integration_class_REPLACE_ME
## Document Loader
See a [usage example](/docs/integrations/document_loaders/INCLUDE_REAL_NAME).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/INCLUDE_REAL_NAME.html).
```python
from langchain.document_loaders import integration_class_REPLACE_ME


@@ -1,6 +1,5 @@
# Tutorials
Below are links to video tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases).
⛓ icon marks a new addition [last update 2023-07-05]


@@ -4,7 +4,7 @@ If you're building with LLMs, at some point something will break, and you'll nee
Here's a few different tools and functionalities to aid in debugging.
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->
## Tracing


@@ -1,381 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2da95378",
"metadata": {},
"source": [
"# Pairwise String Comparison\n",
"\n",
"Often you will want to compare predictions of an LLM, Chain, or Agent for a given input. The `StringComparison` evaluators facilitate this so you can answer questions like:\n",
"\n",
"- Which LLM or prompt produces a preferred output for a given question?\n",
"- Which examples should I include for few-shot example selection?\n",
"- Which output is better to include for fintetuning?\n",
"\n",
"The simplest and often most reliable automated way to choose a preferred prediction for a given input is to use the `pairwise_string` evaluator.\n",
"\n",
"Check out the reference docs for the [PairwiseStringEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain) for more info."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f6790c46",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.evaluation import load_evaluator\n",
"\n",
"evaluator = load_evaluator(\"labeled_pairwise_string\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "49ad9139",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'Both responses are relevant to the question asked, as they both provide a numerical answer to the question about the number of dogs in the park. However, Response A is incorrect according to the reference answer, which states that there are four dogs. Response B, on the other hand, is correct as it matches the reference answer. Neither response demonstrates depth of thought, as they both simply provide a numerical answer without any additional information or context. \\n\\nBased on these criteria, Response B is the better response.\\n',\n",
" 'value': 'B',\n",
" 'score': 0}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_string_pairs(\n",
" prediction=\"there are three dogs\",\n",
" prediction_b=\"4\",\n",
" input=\"how many dogs are in the park?\",\n",
" reference=\"four\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "7491d2e6-4e77-4b17-be6b-7da966785c1d",
"metadata": {},
"source": [
"## Methods\n",
"\n",
"\n",
"The pairwise string evaluator can be called using [evaluate_string_pairs](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.evaluate_string_pairs) (or async [aevaluate_string_pairs](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html#langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.aevaluate_string_pairs)) methods, which accept:\n",
"\n",
"- prediction (str) The predicted response of the first model, chain, or prompt.\n",
"- prediction_b (str) The predicted response of the second model, chain, or prompt.\n",
"- input (str) The input question, prompt, or other text.\n",
"- reference (str) (Only for the labeled_pairwise_string variant) The reference response.\n",
"\n",
"They return a dictionary with the following values:\n",
"- value: 'A' or 'B', indicating whether `prediction` or `prediction_b` is preferred, respectively\n",
"- score: Integer 0 or 1 mapped from the 'value', where a score of 1 would mean that the first `prediction` is preferred, and a score of 0 would mean `prediction_b` is preferred.\n",
"- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
]
},
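  {
   "cell_type": "markdown",
   "id": "ab12cd34",
   "metadata": {},
   "source": [
    "As a quick sketch of consuming that dictionary (reusing the `evaluator` loaded above; the variable names here are just illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ab12cd35",
   "metadata": {},
   "outputs": [],
   "source": [
    "result = evaluator.evaluate_string_pairs(\n",
    "    prediction=\"there are three dogs\",\n",
    "    prediction_b=\"4\",\n",
    "    input=\"how many dogs are in the park?\",\n",
    "    reference=\"four\",\n",
    ")\n",
    "# score == 1 means `prediction` was preferred; score == 0 means `prediction_b` was preferred\n",
    "winner = \"prediction\" if result[\"score\"] == 1 else \"prediction_b\"\n",
    "print(result[\"value\"], winner)"
   ]
  },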
{
"cell_type": "markdown",
"id": "ed353b93-be71-4479-b9c0-8c97814c2e58",
"metadata": {},
"source": [
"## Without References\n",
"\n",
"When references aren't available, you can still predict the preferred response.\n",
"The results will reflect the evaluation model's preference, which is less reliable and may result\n",
"in preferences that are factually incorrect."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "586320da",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.evaluation import load_evaluator\n",
"\n",
"evaluator = load_evaluator(\"pairwise_string\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "7f56c76e-a39b-4509-8b8a-8a2afe6c3da1",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'Both responses are correct and relevant to the question. However, Response B is more helpful and insightful as it provides a more detailed explanation of what addition is. Response A is correct but lacks depth as it does not explain what the operation of addition entails. \\n\\nFinal Decision: [[B]]',\n",
" 'value': 'B',\n",
" 'score': 0}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_string_pairs(\n",
" prediction=\"Addition is a mathematical operation.\",\n",
" prediction_b=\"Addition is a mathematical operation that adds two numbers to create a third number, the 'sum'.\",\n",
" input=\"What is addition?\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "4a09b21d-9851-47e8-93d3-90044b2945b0",
"metadata": {
"tags": []
},
"source": [
"## Defining the Criteria\n",
"\n",
"By default, the LLM is instructed to select the 'preferred' response based on helpfulness, relevance, correctness, and depth of thought. You can customize the criteria by passing in a `criteria` argument, where the criteria could take any of the following forms:\n",
"- [`Criteria`](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.Criteria.html#langchain.evaluation.criteria.eval_chain.Criteria) enum or its string value - to use one of the default criteria and their descriptions\n",
"- [Constitutional principal](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.models.ConstitutionalPrinciple.html#langchain.chains.constitutional_ai.models.ConstitutionalPrinciple) - use one any of the constitutional principles defined in langchain\n",
"- Dictionary: a list of custom criteria, where the key is the name of the criteria, and the value is the description.\n",
"- A list of criteria or constitutional principles - to combine multiple criteria in one.\n",
"\n",
"Below is an example for determining preferred writing responses based on a custom style."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8539e7d9-f7b0-4d32-9c45-593a7915c093",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"custom_criteria = {\n",
" \"simplicity\": \"Is the language straightforward and unpretentious?\",\n",
" \"clarity\": \"Are the sentences clear and easy to understand?\",\n",
" \"precision\": \"Is the writing precise, with no unnecessary words or details?\",\n",
" \"truthfulness\": \"Does the writing feel honest and sincere?\",\n",
" \"subtext\": \"Does the writing suggest deeper meanings or themes?\",\n",
"}\n",
"evaluator = load_evaluator(\"pairwise_string\", criteria=custom_criteria)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "fec7bde8-fbdc-4730-8366-9d90d033c181",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'Response A is simple, clear, and precise. It uses straightforward language to convey a deep and sincere message about families. The metaphor of joy and sorrow as music is effective and easy to understand.\\n\\nResponse B, on the other hand, is more complex and less clear. The language is more pretentious, with words like \"domicile,\" \"resounds,\" \"abode,\" \"dissonant,\" and \"elegy.\" While it conveys a similar message to Response A, it does so in a more convoluted way. The precision is also lacking due to the use of unnecessary words and details.\\n\\nBoth responses suggest deeper meanings or themes about the shared joy and unique sorrow in families. However, Response A does so in a more effective and accessible way.\\n\\nTherefore, the better response is [[A]].',\n",
" 'value': 'A',\n",
" 'score': 1}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_string_pairs(\n",
" prediction=\"Every cheerful household shares a similar rhythm of joy; but sorrow, in each household, plays a unique, haunting melody.\",\n",
" prediction_b=\"Where one finds a symphony of joy, every domicile of happiness resounds in harmonious,\"\n",
" \" identical notes; yet, every abode of despair conducts a dissonant orchestra, each\"\n",
" \" playing an elegy of grief that is peculiar and profound to its own existence.\",\n",
" input=\"Write some prose about families.\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "a25b60b2-627c-408a-be4b-a2e5cbc10726",
"metadata": {},
"source": [
"## Customize the LLM\n",
"\n",
"By default, the loader uses `gpt-4` in the evaluation chain. You can customize this when loading."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "de84a958-1330-482b-b950-68bcf23f9e35",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(temperature=0)\n",
"\n",
"evaluator = load_evaluator(\"labeled_pairwise_string\", llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e162153f-d50a-4a7c-a033-019dabbc954c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'Here is my assessment:\\n\\nResponse B is more helpful, insightful, and accurate than Response A. Response B simply states \"4\", which directly answers the question by providing the exact number of dogs mentioned in the reference answer. In contrast, Response A states \"there are three dogs\", which is incorrect according to the reference answer. \\n\\nIn terms of helpfulness, Response B gives the precise number while Response A provides an inaccurate guess. For relevance, both refer to dogs in the park from the question. However, Response B is more correct and factual based on the reference answer. Response A shows some attempt at reasoning but is ultimately incorrect. Response B requires less depth of thought to simply state the factual number.\\n\\nIn summary, Response B is superior in terms of helpfulness, relevance, correctness, and depth. My final decision is: [[B]]\\n',\n",
" 'value': 'B',\n",
" 'score': 0}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_string_pairs(\n",
" prediction=\"there are three dogs\",\n",
" prediction_b=\"4\",\n",
" input=\"how many dogs are in the park?\",\n",
" reference=\"four\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e0e89c13-d0ad-4f87-8fcb-814399bafa2a",
"metadata": {},
"source": [
"## Customize the Evaluation Prompt\n",
"\n",
"You can use your own custom evaluation prompt to add more task-specific instructions or to instruct the evaluator to score the output.\n",
"\n",
"*Note: If you use a prompt that expects generates a result in a unique format, you may also have to pass in a custom output parser (`output_parser=your_parser()`) instead of the default `PairwiseStringResultOutputParser`"
]
},
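  {
   "cell_type": "markdown",
   "id": "cd34ef56",
   "metadata": {},
   "source": [
    "As a minimal, illustrative sketch (the class name and parsing logic below are assumptions, not the library's built-in parser), a custom parser for a prompt whose verdict ends with [[A]] or [[B]] on its own line might look like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "cd34ef57",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.schema import BaseOutputParser\n",
    "\n",
    "\n",
    "class MyPairwiseParser(BaseOutputParser):\n",
    "    \"\"\"Hypothetical parser for verdicts that end with [[A]] or [[B]].\"\"\"\n",
    "\n",
    "    def parse(self, text: str) -> dict:\n",
    "        reasoning, _, verdict = text.rpartition(\"[[\")\n",
    "        value = verdict.strip(\"] \\n\")\n",
    "        return {\n",
    "            \"reasoning\": reasoning.strip(),\n",
    "            \"value\": value,\n",
    "            \"score\": 1 if value == \"A\" else 0,\n",
    "        }\n",
    "\n",
    "\n",
    "# This could then be supplied via load_evaluator(..., output_parser=MyPairwiseParser())"
   ]
  },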
{
"cell_type": "code",
"execution_count": 9,
"id": "fb817efa-3a4d-439d-af8c-773b89d97ec9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"\n",
"prompt_template = PromptTemplate.from_template(\n",
" \"\"\"Given the input context, which do you prefer: A or B?\n",
"Evaluate based on the following criteria:\n",
"{criteria}\n",
"Reason step by step and finally, respond with either [[A]] or [[B]] on its own line.\n",
"\n",
"DATA\n",
"----\n",
"input: {input}\n",
"reference: {reference}\n",
"A: {prediction}\n",
"B: {prediction_b}\n",
"---\n",
"Reasoning:\n",
"\n",
"\"\"\"\n",
")\n",
"evaluator = load_evaluator(\n",
" \"labeled_pairwise_string\", prompt=prompt_template\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "d40aa4f0-cfd5-4cb4-83c8-8d2300a04c2f",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input_variables=['prediction', 'reference', 'prediction_b', 'input'] output_parser=None partial_variables={'criteria': 'helpfulness: Is the submission helpful, insightful, and appropriate?\\nrelevance: Is the submission referring to a real quote from the text?\\ncorrectness: Is the submission correct, accurate, and factual?\\ndepth: Does the submission demonstrate depth of thought?'} template='Given the input context, which do you prefer: A or B?\\nEvaluate based on the following criteria:\\n{criteria}\\nReason step by step and finally, respond with either [[A]] or [[B]] on its own line.\\n\\nDATA\\n----\\ninput: {input}\\nreference: {reference}\\nA: {prediction}\\nB: {prediction_b}\\n---\\nReasoning:\\n\\n' template_format='f-string' validate_template=True\n"
]
}
],
"source": [
"# The prompt was assigned to the evaluator\n",
"print(evaluator.prompt)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "9467bb42-7a31-4071-8f66-9ed2c6f06dcd",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'reasoning': 'Helpfulness: Both A and B are helpful as they provide a direct answer to the question.\\nRelevance: A is relevant as it refers to the correct name of the dog from the text. B is not relevant as it provides a different name.\\nCorrectness: A is correct as it accurately states the name of the dog. B is incorrect as it provides a different name.\\nDepth: Both A and B demonstrate a similar level of depth as they both provide a straightforward answer to the question.\\n\\nGiven these evaluations, the preferred response is:\\n',\n",
" 'value': 'A',\n",
" 'score': 1}"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_string_pairs(\n",
" prediction=\"The dog that ate the ice cream was named fido.\",\n",
" prediction_b=\"The dog's name is spot\",\n",
" input=\"What is the name of the dog that ate the ice cream?\",\n",
" reference=\"The dog's name is fido\",\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,318 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "bce7335e-f3b2-44f3-90cc-8c0a23a89a21",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from langchain.agents import load_tools\n",
"from langchain.agents import initialize_agent\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.utilities import GoogleSearchAPIWrapper\n",
"from langchain.schema import (\n",
" SystemMessage,\n",
" HumanMessage,\n",
" AIMessage\n",
")\n",
"\n",
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_ENDPOINT\"] = \"https://api.smith.langchain.com\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = \"******\"\n",
"# os.environ[\"LANGCHAIN_PROJECT\"] = \"Jarvis\"\n",
"\n",
"\n",
"prefix_messages = [{\"role\": \"system\", \"content\": \"You are a helpful discord Chatbot.\"}]\n",
"\n",
"llm = ChatOpenAI(model_name='gpt-3.5-turbo', \n",
" temperature=0.5, \n",
" max_tokens = 2000)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)\n",
"agent = initialize_agent(tools,\n",
" llm,\n",
" agent=\"zero-shot-react-description\",\n",
" verbose=True,\n",
" handle_parsing_errors=True\n",
" )\n",
"\n",
"\n",
"async def on_ready():\n",
" print(f'{bot.user} has connected to Discord!')\n",
"\n",
"async def on_message(message):\n",
"\n",
" print(\"Detected bot name in message:\", message.content)\n",
"\n",
" # Capture the output of agent.run() in the response variable\n",
" response = agent.run(message.content)\n",
"\n",
" while response:\n",
" print(response)\n",
" chunk, response = response[:2000], response[2000:]\n",
" print(f\"Chunk: {chunk}\")\n",
" print(\"Response sent.\")\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "1551ce9f-b6de-4035-b6d6-825722823b48",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from dataclasses import dataclass\n",
"@dataclass\n",
"class Message:\n",
" content: str"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "6e6859ec-8544-4407-9663-6b53c0092903",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Detected bot name in message: Hi AI, how are you today?\n",
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThis question is not something that can be answered using the available tools.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3mI need to follow the correct format for answering questions.\n",
"Action: N/A\u001b[0m\n",
"Observation: Invalid Format: Missing 'Action Input:' after 'Action:'\n",
"Thought:\u001b[32;1m\u001b[1;3m\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Agent stopped due to iteration limit or time limit.\n",
"Chunk: Agent stopped due to iteration limit or time limit.\n",
"Response sent.\n"
]
}
],
"source": [
"await on_message(Message(content=\"Hi AI, how are you today?\"))"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "b850294c-7f8f-4e79-adcf-47e4e3a898df",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langsmith import Client\n",
"\n",
"client = Client()"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "6d089ddc-69bc-45a8-b8db-9962e4f1f5ee",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from itertools import islice\n",
"\n",
"runs = list(islice(client.list_runs(), 10))"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "f0349fac-5a98-400f-ba03-61ed4e1332be",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"runs = sorted(runs, key=lambda x: x.start_time, reverse=True)"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "02f133f0-39ee-4b46-b443-12c1f9b76fff",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"ids = [run.id for run in runs]"
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "3366dce4-0c38-4a7d-8111-046a58b24917",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"runs2 = list(client.list_runs(id=ids))\n",
"runs2 = sorted(runs2, key=lambda x: x.start_time, reverse=True)"
]
},
{
"cell_type": "code",
"execution_count": 42,
"id": "82915b90-39a0-47d6-9121-56a13f210f52",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"['a36092d2-4ad5-4fb4-9b0d-0dba9a2ed836',\n",
" '9398e6be-964f-4aa4-8de9-ad78cd4b7074']"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"[str(x) for x in ids[:2]]"
]
},
{
"cell_type": "code",
"execution_count": 48,
"id": "f610ec91-dc48-4a17-91c5-5c4675c77abc",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langsmith.run_helpers import traceable\n",
"\n",
"@traceable(run_type=\"llm\", name=\"\"\"<iframe width=\"560\" height=\"315\" src=\"https://www.youtube.com/embed/dQw4w9WgXcQ?start=5\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" allowfullscreen></iframe>\"\"\")\n",
"def foo():\n",
" return \"bar\""
]
},
{
"cell_type": "code",
"execution_count": 49,
"id": "bd317bd7-8b2a-433a-8ec3-098a84ba8e64",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"'bar'"
]
},
"execution_count": 49,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"foo()"
]
},
{
"cell_type": "code",
"execution_count": 52,
"id": "b142519b-6885-415c-83b9-4a346fb90589",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.llms import AzureOpenAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5c50bb2b-72b8-4322-9b16-d857ecd9f347",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because it is too large

View File

@@ -1,269 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9a9acd2e",
"metadata": {},
"source": [
"# Interface\n",
"\n",
"In an effort to make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.Runnable.html#langchain.schema.runnable.Runnable) protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains as well as making it possible to invoke them in a standard way. The standard interface exposed includes:\n",
"\n",
"- `stream`: stream back chunks of the response\n",
"- `invoke`: call the chain on an input\n",
"- `batch`: call the chain on a list of inputs\n",
"\n",
"These also have corresponding async methods:\n",
"\n",
"- `astream`: stream back chunks of the response async\n",
"- `ainvoke`: call the chain on an input async\n",
"- `abatch`: call the chain on a list of inputs async\n",
"\n",
"The type of the input varies by component. For a prompt it is a dictionary, for a retriever it is a single string, for a model either a single string, a list of chat messages, or a PromptValue.\n",
"\n",
"The output type also varies by component. For an LLM it is a string, for a ChatModel it's a ChatMessage, for a prompt it's a PromptValue, for a retriever it's a list of documents.\n",
"\n",
"Let's take a look at these methods! To do so, we'll create a super simple PromptTemplate + ChatModel chain."
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "466b65b3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.chat_models import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3c634ef0",
"metadata": {},
"outputs": [],
"source": [
"model = ChatOpenAI()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "d1850a1f",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "56d0669f",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "markdown",
"id": "daf2b2b2",
"metadata": {},
"source": [
"## Stream"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "bea9639d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, here's a bear-themed joke for you:\n",
"\n",
"Why don't bears wear shoes?\n",
"\n",
"Because they have bear feet!"
]
}
],
"source": [
"for s in chain.stream({\"topic\": \"bears\"}):\n",
" print(s.content, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "cbf1c782",
"metadata": {},
"source": [
"## Invoke"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "470e483f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they already have bear feet!\", additional_kwargs={}, example=False)"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "88f0c279",
"metadata": {},
"source": [
"## Batch"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "9685de67",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content=\"Why don't bears ever wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False),\n",
" AIMessage(content=\"Why don't cats play poker in the wild?\\n\\nToo many cheetahs!\", additional_kwargs={}, example=False)]"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}])"
]
},
{
"cell_type": "markdown",
"id": "b960cbfe",
"metadata": {},
"source": [
"## Async Stream"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "ea35eee4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Why don't bears wear shoes?\n",
"\n",
"Because they have bear feet!"
]
}
],
"source": [
"async for s in chain.astream({\"topic\": \"bears\"}):\n",
" print(s.content, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "04cb3324",
"metadata": {},
"source": [
"## Async Invoke"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "ef8c9b20",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Sure, here you go:\\n\\nWhy don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False)"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chain.ainvoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "3da288d5",
"metadata": {},
"source": [
"## Async Batch"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "eba2a103",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False)]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chain.abatch([{\"topic\": \"bears\"}])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -5,7 +5,7 @@
"id": "920a3c1a",
"metadata": {},
"source": [
"# Model comparison\n",
"# Model Comparison\n",
"\n",
"Constructing your language model application will likely involved choosing between many different options of prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way. \n",
"\n",
@@ -254,7 +254,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.9"
}
},
"nbformat": 4,

View File

@@ -14,7 +14,7 @@
"> using both human and machine feedback. We provide support for each step in the MLOps cycle, \n",
"> from data labeling to model monitoring.\n",
"\n",
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/integrations/callbacks/argilla.html\">\n",
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/modules/callbacks/integrations/argilla.html\">\n",
" <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
"</a>"
]

View File

@@ -11,7 +11,7 @@
"\n",
"[PromptLayer](https://promptlayer.com) is a an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to setup the `PromptLayerCallbackHandler`. \n",
"\n",
"While PromptLayer does have LLMs that integrate directly with LangChain (eg [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
"While PromptLayer does have LLMs that integrate directly with LangChain (eg [`PromptLayerOpenAI`](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
"\n",
"See [our docs](https://docs.promptlayer.com/languages/langchain) for more information."
]

View File

@@ -1,287 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5125a1e3",
"metadata": {},
"source": [
"# Anthropic Functions\n",
"\n",
"This notebook shows how to use an experimental wrapper around Anthropic that gives it the same API as OpenAI Functions."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "378be79b",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.14) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
" warnings.warn(\n"
]
}
],
"source": [
"from langchain_experimental.llms.anthropic_functions import AnthropicFunctions"
]
},
{
"cell_type": "markdown",
"id": "65499965",
"metadata": {},
"source": [
"## Initialize Model\n",
"\n",
"You can initialize this wrapper the same way you'd initialize ChatAnthropic"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e1d535f6",
"metadata": {},
"outputs": [],
"source": [
"model = AnthropicFunctions(model='claude-2')"
]
},
{
"cell_type": "markdown",
"id": "fcc9eaf4",
"metadata": {},
"source": [
"## Passing in functions\n",
"\n",
"You can now pass in functions in a similar way"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0779c320",
"metadata": {},
"outputs": [],
"source": [
"functions=[\n",
" {\n",
" \"name\": \"get_current_weather\",\n",
" \"description\": \"Get the current weather in a given location\",\n",
" \"parameters\": {\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"location\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The city and state, e.g. San Francisco, CA\"\n",
" },\n",
" \"unit\": {\n",
" \"type\": \"string\",\n",
" \"enum\": [\"celsius\", \"fahrenheit\"]\n",
" }\n",
" },\n",
" \"required\": [\"location\"]\n",
" }\n",
" }\n",
" ]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ad75a933",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "fc703085",
"metadata": {},
"outputs": [],
"source": [
"response = model.predict_messages(\n",
" [HumanMessage(content=\"whats the weater in boston?\")], \n",
" functions=functions\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "04d7936a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' ', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{\"location\": \"Boston, MA\", \"unit\": \"fahrenheit\"}'}}, example=False)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response"
]
},
{
"cell_type": "markdown",
"id": "0072fdba",
"metadata": {},
"source": [
"## Using for extraction\n",
"\n",
"You can now use this for extraction."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7af5c567",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import create_extraction_chain\n",
"schema = {\n",
" \"properties\": {\n",
" \"name\": {\"type\": \"string\"},\n",
" \"height\": {\"type\": \"integer\"},\n",
" \"hair_color\": {\"type\": \"string\"},\n",
" },\n",
" \"required\": [\"name\", \"height\"],\n",
"}\n",
"inp = \"\"\"\n",
"Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n",
" \"\"\""
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "bd01082a",
"metadata": {},
"outputs": [],
"source": [
"chain = create_extraction_chain(schema, model)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b5a23e9f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Alex', 'height': '5', 'hair_color': 'blonde'},\n",
" {'name': 'Claudia', 'height': '6', 'hair_color': 'brunette'}]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(inp)"
]
},
{
"cell_type": "markdown",
"id": "90ec959e",
"metadata": {},
"source": [
"## Using for tagging\n",
"\n",
"You can now use this for tagging"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "03c1eb0d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import create_tagging_chain"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "581c0ece",
"metadata": {},
"outputs": [],
"source": [
"schema = {\n",
" \"properties\": {\n",
" \"sentiment\": {\"type\": \"string\"},\n",
" \"aggressiveness\": {\"type\": \"integer\"},\n",
" \"language\": {\"type\": \"string\"},\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "d9a8570e",
"metadata": {},
"outputs": [],
"source": [
"chain = create_tagging_chain(schema, model)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "cf37d679",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'sentiment': 'positive', 'aggressiveness': '0', 'language': 'english'}"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"this is really cool\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,95 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AzureML Chat Online Endpoint\n",
"\n",
"[AzureML](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.\n",
"\n",
"This notebook goes over how to use a chat model hosted on an `AzureML online endpoint`"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up\n",
"\n",
"To use the wrapper, you must [deploy a model on AzureML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) and obtain the following parameters:\n",
"\n",
"* `endpoint_api_key`: The API key provided by the endpoint\n",
"* `endpoint_url`: The REST endpoint url provided by the endpoint"
]
},
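  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of wiring these in at construction time (the endpoint values below are placeholders, not a real deployment, and the `content_formatter` argument is covered in the next section):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chat_models.azureml_endpoint import (\n",
    "    AzureMLChatOnlineEndpoint,\n",
    "    LlamaContentFormatter,\n",
    ")\n",
    "\n",
    "chat = AzureMLChatOnlineEndpoint(\n",
    "    endpoint_url=\"https://<your-endpoint>.<your-region>.inference.ml.azure.com/score\",  # placeholder\n",
    "    endpoint_api_key=\"<your-api-key>\",  # placeholder\n",
    "    content_formatter=LlamaContentFormatter(),  # see the next section\n",
    ")"
   ]
  },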
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Content Formatter\n",
"\n",
"The `content_formatter` parameter is a handler class for transforming the request and response of an AzureML endpoint to match with required schema. Since there are a wide range of models in the model catalog, each of which may process data differently from one another, a `ContentFormatterBase` class is provided to allow users to transform data to their liking. The following content formatters are provided:\n",
"\n",
"* `LLamaContentFormatter`: Formats request and response data for LLaMa2-chat"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' The Collatz Conjecture is one of the most famous unsolved problems in mathematics, and it has been the subject of much study and research for many years. While it is impossible to predict with certainty whether the conjecture will ever be solved, there are several reasons why it is considered a challenging and important problem:\\n\\n1. Simple yet elusive: The Collatz Conjecture is a deceptively simple statement that has proven to be extraordinarily difficult to prove or disprove. Despite its simplicity, the conjecture has eluded some of the brightest minds in mathematics, and it remains one of the most famous open problems in the field.\\n2. Wide-ranging implications: The Collatz Conjecture has far-reaching implications for many areas of mathematics, including number theory, algebra, and analysis. A solution to the conjecture could have significant impacts on these fields and potentially lead to new insights and discoveries.\\n3. Computational evidence: While the conjecture remains unproven, extensive computational evidence supports its validity. In fact, no counterexample to the conjecture has been found for any starting value up to 2^64 (a number', additional_kwargs={}, example=False)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models.azureml_endpoint import LlamaContentFormatter\n",
"from langchain.schema import HumanMessage\n",
"\n",
"chat = AzureMLChatOnlineEndpoint(content_formatter=LlamaContentFormatter())\n",
"response = chat(messages=[\n",
" HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")\n",
"])\n",
"response"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,7 +1,6 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -9,7 +8,11 @@
"\n",
"Note: This is seperate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
"\n",
"By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) Customer Data to train its foundation models as part of Google Cloud`s AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google's Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).\n",
"PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the [GCP Service Specific Terms](https://cloud.google.com/terms/service-terms). \n",
"\n",
"Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the [launch stage descriptions](https://cloud.google.com/products#product-launch-stages). Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview [terms and conditions](https://cloud.google.com/trustedtester/aitos) (Preview Terms).\n",
"\n",
"For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms).\n",
"\n",
"To use Vertex AI PaLM you must have the `google-cloud-aiplatform` Python package installed and either:\n",
"- Have credentials configured for your environment (gcloud, workload identity, etc...)\n",
@@ -87,7 +90,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -140,7 +142,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"execution": {

View File

@@ -1,220 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1ab83660",
"metadata": {},
"source": [
"# Etherscan Loader\n",
"## Overview\n",
"\n",
"The Etherscan loader use etherscan api to load transacactions histories under specific account on Ethereum Mainnet.\n",
"\n",
"You will need a Etherscan api key to proceed. The free api key has 5 calls per seconds quota.\n",
"\n",
"The loader supports the following six functinalities:\n",
"* Retrieve normal transactions under specific account on Ethereum Mainet\n",
"* Retrieve internal transactions under specific account on Ethereum Mainet\n",
"* Retrieve erc20 transactions under specific account on Ethereum Mainet\n",
"* Retrieve erc721 transactions under specific account on Ethereum Mainet\n",
"* Retrieve erc1155 transactions under specific account on Ethereum Mainet\n",
"* Retrieve ethereum balance in wei under specific account on Ethereum Mainet\n",
"\n",
"\n",
"If the account does not have corresponding transactions, the loader will a list with one document. The content of document is ''.\n",
"\n",
"You can pass differnt filters to loader to access different functionalities we mentioned above:\n",
"* \"normal_transaction\"\n",
"* \"internal_transaction\"\n",
"* \"erc20_transaction\"\n",
"* \"eth_balance\"\n",
"* \"erc721_transaction\"\n",
"* \"erc1155_transaction\"\n",
"The filter is default to normal_transaction\n",
"\n",
"If you have any questions, you can access [Etherscan API Doc](https://etherscan.io/tx/0x0ffa32c787b1398f44303f731cb06678e086e4f82ce07cebf75e99bb7c079c77) or contact me via i@inevitable.tech.\n",
"\n",
"All functions related to transactions histories are restricted 1000 histories maximum because of Etherscan limit. You can use the following parameters to find the transaction histories you need:\n",
"* offset: default to 20. Shows 20 transactions for one time\n",
"* page: default to 1. This controls pagenation.\n",
"* start_block: Default to 0. The transaction histories starts from 0 block.\n",
"* end_block: Default to 99999999. The transaction histories starts from 99999999 block\n",
"* sort: \"desc\" or \"asc\". Set default to \"desc\" to get latest transactions."
]
},
{
"cell_type": "markdown",
"id": "d72d4e22",
"metadata": {},
"source": [
"# Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2911e51e",
"metadata": {},
"outputs": [],
"source": [
"%pip install langchain -q"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "208e2fbf",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import EtherscanLoader\n",
"import os"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "5d24b650",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"ETHERSCAN_API_KEY\"] = etherscanAPIKey"
]
},
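  {
   "cell_type": "markdown",
   "id": "ef56ab78",
   "metadata": {},
   "source": [
    "As a quick sketch of the knobs described in the overview (the parameter values below are just the documented defaults):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ef56ab79",
   "metadata": {},
   "outputs": [],
   "source": [
    "account_address = \"0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b\"\n",
    "loader = EtherscanLoader(\n",
    "    account_address,\n",
    "    filter=\"normal_transaction\",  # any of the six filters above\n",
    "    page=1,\n",
    "    offset=20,\n",
    "    start_block=0,\n",
    "    end_block=99999999,\n",
    "    sort=\"desc\",\n",
    ")\n",
    "docs = loader.load()"
   ]
  },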
{
"cell_type": "markdown",
"id": "3bcbb63e",
"metadata": {},
"source": [
"# Create a ERC20 transaction loader"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "d525e6c8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'blockNumber': '13242975',\n",
" 'timeStamp': '1631878751',\n",
" 'hash': '0x366dda325b1a6570928873665b6b418874a7dedf7fee9426158fa3536b621788',\n",
" 'nonce': '28',\n",
" 'blockHash': '0x5469dba1b1e1372962cf2be27ab2640701f88c00640c4d26b8cc2ae9ac256fb6',\n",
" 'from': '0x2ceee24f8d03fc25648c68c8e6569aa0512f6ac3',\n",
" 'contractAddress': '0x2ceee24f8d03fc25648c68c8e6569aa0512f6ac3',\n",
" 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b',\n",
" 'value': '298131000000000',\n",
" 'tokenName': 'ABCHANGE.io',\n",
" 'tokenSymbol': 'XCH',\n",
" 'tokenDecimal': '9',\n",
" 'transactionIndex': '71',\n",
" 'gas': '15000000',\n",
" 'gasPrice': '48614996176',\n",
" 'gasUsed': '5712724',\n",
" 'cumulativeGasUsed': '11507920',\n",
" 'input': 'deprecated',\n",
" 'confirmations': '4492277'}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"account_address = \"0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b\"\n",
"loader = EtherscanLoader(account_address, filter=\"erc20_transaction\")\n",
"result = loader.load()\n",
"eval(result[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "2a1ecce0",
"metadata": {},
"source": [
"# Create a normal transaction loader with customized parameters"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "07aa2b6c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"20\n"
]
},
{
"data": {
"text/plain": [
"[Document(page_content=\"{'blockNumber': '1723771', 'timeStamp': '1466213371', 'hash': '0xe00abf5fa83a4b23ee1cc7f07f9dda04ab5fa5efe358b315df8b76699a83efc4', 'nonce': '3155', 'blockHash': '0xc2c2207bcaf341eed07f984c9a90b3f8e8bdbdbd2ac6562f8c2f5bfa4b51299d', 'transactionIndex': '5', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '13149213761000000000', 'gas': '90000', 'gasPrice': '22655598156', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '126000', 'gasUsed': '21000', 'confirmations': '16011481', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xe00abf5fa83a4b23ee1cc7f07f9dda04ab5fa5efe358b315df8b76699a83efc4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1727090', 'timeStamp': '1466262018', 'hash': '0xd5a779346d499aa722f72ffe7cd3c8594a9ddd91eb7e439e8ba92ceb7bc86928', 'nonce': '3267', 'blockHash': '0xc0cff378c3446b9b22d217c2c5f54b1c85b89a632c69c55b76cdffe88d2b9f4d', 'transactionIndex': '20', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11521979886000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '3806725', 'gasUsed': '21000', 'confirmations': '16008162', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xd5a779346d499aa722f72ffe7cd3c8594a9ddd91eb7e439e8ba92ceb7bc86928', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1730337', 'timeStamp': '1466308222', 'hash': '0xceaffdb3766d2741057d402738eb41e1d1941939d9d438c102fb981fd47a87a4', 'nonce': '3344', 'blockHash': '0x3a52d28b8587d55c621144a161a0ad5c37dd9f7d63b629ab31da04fa410b2cfa', 'transactionIndex': '1', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9783400526000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '60788', 'gasUsed': '21000', 'confirmations': '16004915', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xceaffdb3766d2741057d402738eb41e1d1941939d9d438c102fb981fd47a87a4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1733479', 'timeStamp': '1466352351', 'hash': '0x720d79bf78775f82b40280aae5abfc347643c5f6708d4bf4ec24d65cd01c7121', 'nonce': '3367', 'blockHash': '0x9928661e7ae125b3ae0bcf5e076555a3ee44c52ae31bd6864c9c93a6ebb3f43e', 'transactionIndex': '0', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '1570706444000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '16001773', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0x720d79bf78775f82b40280aae5abfc347643c5f6708d4bf4ec24d65cd01c7121', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1734172', 'timeStamp': '1466362463', 'hash': '0x7a062d25b83bafc9fe6b22bc6f5718bca333908b148676e1ac66c0adeccef647', 'nonce': '1016', 'blockHash': '0x8a8afe2b446713db88218553cfb5dd202422928e5e0bc00475ed2f37d95649de', 'transactionIndex': '4', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '6322276709000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '105333', 'gasUsed': '21000', 'confirmations': '16001080', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x7a062d25b83bafc9fe6b22bc6f5718bca333908b148676e1ac66c0adeccef647', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1737276', 'timeStamp': '1466406037', 'hash': '0xa4e89bfaf075abbf48f96700979e6c7e11a776b9040113ba64ef9c29ac62b19b', 'nonce': '1024', 'blockHash': '0xe117cad73752bb485c3bef24556e45b7766b283229180fcabc9711f3524b9f79', 'transactionIndex': '35', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9976891868000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '3187163', 'gasUsed': '21000', 'confirmations': '15997976', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xa4e89bfaf075abbf48f96700979e6c7e11a776b9040113ba64ef9c29ac62b19b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1740314', 'timeStamp': '1466450262', 'hash': '0x6e1a22dcc6e2c77a9451426fb49e765c3c459dae88350e3ca504f4831ec20e8a', 'nonce': '1051', 'blockHash': '0x588d17842819a81afae3ac6644d8005c12ce55ddb66c8d4c202caa91d4e8fdbe', 'transactionIndex': '6', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8060633765000000000', 'gas': '90000', 'gasPrice': '22926905859', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '153077', 'gasUsed': '21000', 'confirmations': '15994938', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x6e1a22dcc6e2c77a9451426fb49e765c3c459dae88350e3ca504f4831ec20e8a', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1743384', 'timeStamp': '1466494099', 'hash': '0xdbfcc15f02269fc3ae27f69e344a1ac4e08948b12b76ebdd78a64d8cafd511ef', 'nonce': '1068', 'blockHash': '0x997245108c84250057fda27306b53f9438ad40978a95ca51d8fd7477e73fbaa7', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9541921352000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '119650', 'gasUsed': '21000', 'confirmations': '15991868', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xdbfcc15f02269fc3ae27f69e344a1ac4e08948b12b76ebdd78a64d8cafd511ef', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1746405', 'timeStamp': '1466538123', 'hash': '0xbd4f9602f7fff4b8cc2ab6286efdb85f97fa114a43f6df4e6abc88e85b89e97b', 'nonce': '1092', 'blockHash': '0x3af3966cdaf22e8b112792ee2e0edd21ceb5a0e7bf9d8c168a40cf22deb3690c', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8433783799000000000', 'gas': '90000', 'gasPrice': '25689279306', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15988847', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xbd4f9602f7fff4b8cc2ab6286efdb85f97fa114a43f6df4e6abc88e85b89e97b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1749459', 'timeStamp': '1466582044', 'hash': '0x28c327f462cc5013d81c8682c032f014083c6891938a7bdeee85a1c02c3e9ed4', 'nonce': '1096', 'blockHash': '0x5fc5d2a903977b35ce1239975ae23f9157d45d7bd8a8f6205e8ce270000797f9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10269065805000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15985793', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x28c327f462cc5013d81c8682c032f014083c6891938a7bdeee85a1c02c3e9ed4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1752614', 'timeStamp': '1466626168', 'hash': '0xc3849e550ca5276d7b3c51fa95ad3ae62c1c164799d33f4388fe60c4e1d4f7d8', 'nonce': '1118', 'blockHash': '0x88ef054b98e47504332609394e15c0a4467f84042396717af6483f0bcd916127', 'transactionIndex': '11', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11325836780000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '252000', 'gasUsed': '21000', 'confirmations': '15982638', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xc3849e550ca5276d7b3c51fa95ad3ae62c1c164799d33f4388fe60c4e1d4f7d8', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1755659', 'timeStamp': '1466669931', 'hash': '0xb9f891b7c3d00fcd64483189890591d2b7b910eda6172e3bf3973c5fd3d5a5ae', 'nonce': '1133', 'blockHash': '0x2983972217a91343860415d1744c2a55246a297c4810908bbd3184785bc9b0c2', 'transactionIndex': '14', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '13226475343000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '2674679', 'gasUsed': '21000', 'confirmations': '15979593', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xb9f891b7c3d00fcd64483189890591d2b7b910eda6172e3bf3973c5fd3d5a5ae', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1758709', 'timeStamp': '1466713652', 'hash': '0xd6cce5b184dc7fce85f305ee832df647a9c4640b68e9b79b6f74dc38336d5622', 'nonce': '1147', 'blockHash': '0x1660de1e73067251be0109d267a21ffc7d5bde21719a3664c7045c32e771ecf9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9758447294000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15976543', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xd6cce5b184dc7fce85f305ee832df647a9c4640b68e9b79b6f74dc38336d5622', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1761783', 'timeStamp': '1466757809', 'hash': '0xd01545872629956867cbd65fdf5e97d0dde1a112c12e76a1bfc92048d37f650f', 'nonce': '1169', 'blockHash': '0x7576961afa4218a3264addd37a41f55c444dd534e9410dbd6f93f7fe20e0363e', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10197126683000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '63000', 'gasUsed': '21000', 'confirmations': '15973469', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xd01545872629956867cbd65fdf5e97d0dde1a112c12e76a1bfc92048d37f650f', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1764895', 'timeStamp': '1466801683', 'hash': '0x620b91b12af7aac75553b47f15742e2825ea38919cfc8082c0666f404a0db28b', 'nonce': '1186', 'blockHash': '0x2e687643becd3c36e0c396a02af0842775e17ccefa0904de5aeca0a9a1aa795e', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8690241462000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '168000', 'gasUsed': '21000', 'confirmations': '15970357', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x620b91b12af7aac75553b47f15742e2825ea38919cfc8082c0666f404a0db28b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1767936', 'timeStamp': '1466845682', 'hash': '0x758efa27576cd17ebe7b842db4892eac6609e3962a4f9f57b7c84b7b1909512f', 'nonce': '1211', 'blockHash': '0xb01d8fd47b3554a99352ac3e5baf5524f314cfbc4262afcfbea1467b2d682898', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11914401843000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15967316', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x758efa27576cd17ebe7b842db4892eac6609e3962a4f9f57b7c84b7b1909512f', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1770911', 'timeStamp': '1466888890', 'hash': '0x9d84470b54ab44b9074b108a0e506cd8badf30457d221e595bb68d63e926b865', 'nonce': '1212', 'blockHash': '0x79a9de39276132dab8bf00dc3e060f0e8a14f5e16a0ee4e9cc491da31b25fe58', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10918214730000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15964341', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x9d84470b54ab44b9074b108a0e506cd8badf30457d221e595bb68d63e926b865', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1774044', 'timeStamp': '1466932983', 'hash': '0x958d85270b58b80f1ad228f716bbac8dd9da7c5f239e9f30d8edeb5bb9301d20', 'nonce': '1240', 'blockHash': '0x69cee390378c3b886f9543fb3a1cb2fc97621ec155f7884564d4c866348ce539', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9979637283000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '63000', 'gasUsed': '21000', 'confirmations': '15961208', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x958d85270b58b80f1ad228f716bbac8dd9da7c5f239e9f30d8edeb5bb9301d20', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1777057', 'timeStamp': '1466976422', 'hash': '0xe76ca3603d2f4e7134bdd7a1c3fd553025fc0b793f3fd2a75cd206b8049e74ab', 'nonce': '1248', 'blockHash': '0xc7cacda0ac38c99f1b9bccbeee1562a41781d2cfaa357e8c7b4af6a49584b968', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '4556173496000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '168000', 'gasUsed': '21000', 'confirmations': '15958195', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xe76ca3603d2f4e7134bdd7a1c3fd553025fc0b793f3fd2a75cd206b8049e74ab', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1780120', 'timeStamp': '1467020353', 'hash': '0xc5ec8cecdc9f5ed55a5b8b0ad79c964fb5c49dc1136b6a49e981616c3e70bbe6', 'nonce': '1266', 'blockHash': '0xfc0e066e5b613239e1a01e6d582e7ab162ceb3ca4f719dfbd1a0c965adcfe1c5', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11890330240000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15955132', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xc5ec8cecdc9f5ed55a5b8b0ad79c964fb5c49dc1136b6a49e981616c3e70bbe6', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'})]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = EtherscanLoader(\n",
" account_address,\n",
" page=2,\n",
" offset=20,\n",
" start_block=10000,\n",
" end_block=8888888888,\n",
" sort=\"asc\",\n",
")\n",
"result = loader.load()\n",
"result"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -13,7 +13,7 @@
"\n",
"## Prerequisites\n",
"\n",
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify.html) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/modules/agents/tools/integrations/apify.html) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
]
},
{
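For orientation, loading such a dataset usually looks like the following sketch. It assumes the dataset items carry `text` and `url` fields; both are placeholders for whatever fields your Actor actually produces:

```python
from langchain.document_loaders import ApifyDatasetLoader
from langchain.schema import Document

# Map each dataset item to a LangChain Document.
# The "text" and "url" field names are assumptions; adjust to your dataset schema.
loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"], metadata={"source": item["url"]}
    ),
)
docs = loader.load()
```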

View File

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChatGPT Data\n",
"### ChatGPT Data\n",
"\n",
">[ChatGPT](https://chat.openai.com) is an artificial intelligence (AI) chatbot developed by OpenAI.\n",
"\n",

View File

@@ -1,94 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "23c6e167",
"metadata": {},
"source": [
"# Concurrent Loader\n",
"\n",
"Works just like the GenericLoader but concurrently for those who choose to optimize their workflow.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6ff3fb1f",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import ConcurrentLoader"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "ce96fa20",
"metadata": {},
"outputs": [],
"source": [
"loader = ConcurrentLoader.from_filesystem('example_data/', glob=\"**/*.txt\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "06a6cf5d",
"metadata": {},
"outputs": [],
"source": [
"files = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "b87d3e58",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"2"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(files)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "668f1ee5",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -53,23 +53,11 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**Input arguments (mandatory)**\n",
"\n",
"`Cube Semantic Loader` requires 2 arguments:\n",
"\n",
"- `cube_api_url`: The URL of your Cube's deployment REST API. Please refer to the [Cube documentation](https://cube.dev/docs/http-api/rest#configuration-base-path) for more information on configuring the base path.\n",
"\n",
"- `cube_api_token`: The authentication token generated based on your Cube's API secret. Please refer to the [Cube documentation](https://cube.dev/docs/security#generating-json-web-tokens-jwt) for instructions on generating JSON Web Tokens (JWT).\n",
"\n",
"**Input arguments (optional)**\n",
"\n",
"- `load_dimension_values`: Whether to load dimension values for every string dimension or not.\n",
"\n",
"- `dimension_values_limit`: Maximum number of dimension values to load.\n",
"\n",
"- `dimension_values_max_retries`: Maximum number of retries to load dimension values.\n",
"\n",
"- `dimension_values_retry_delay`: Delay between retries to load dimension values."
"| Input Parameter | Description |\n",
"| --- | --- |\n",
"| `cube_api_url` | The URL of your Cube's deployment REST API. Please refer to the [Cube documentation](https://cube.dev/docs/http-api/rest#configuration-base-path) for more information on configuring the base path. |\n",
"| `cube_api_token` | The authentication token generated based on your Cube's API secret. Please refer to the [Cube documentation](https://cube.dev/docs/security#generating-json-web-tokens-jwt) for instructions on generating JSON Web Tokens (JWT). |\n"
]
},
{
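A minimal usage sketch built from the two parameters above; both values below are placeholders:

```python
from langchain.document_loaders import CubeSemanticLoader

loader = CubeSemanticLoader(
    cube_api_url="https://your-deployment.example.com/cubejs-api/v1",  # placeholder
    cube_api_token="your-jwt-token",  # placeholder
)
documents = loader.load()
```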
@@ -97,7 +85,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Returns a list of documents with the following attributes:\n",
"Returns:\n",
"\n",
"A list of documents with the following attributes:\n",
"\n",
"- `page_content`\n",
"- `metadata`\n",
@@ -105,8 +95,7 @@
" - `column_name`\n",
" - `column_data_type`\n",
" - `column_title`\n",
" - `column_description`\n",
" - `column_values`"
" - `column_description`"
]
},
{
@@ -114,7 +103,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"> page_content='Users View City, None' metadata={'table_name': 'users_view', 'column_name': 'users_view.city', 'column_data_type': 'string', 'column_title': 'Users View City', 'column_description': 'None', 'column_member_type': 'dimension', 'column_values': ['Austin', 'Chicago', 'Los Angeles', 'Mountain View', 'New York', 'Palo Alto', 'San Francisco', 'Seattle']}"
"> page_content='table name: orders_view, column name: orders_view.total_amount, column data type: number, column title: Orders View Total Amount, column description: None' metadata={'table_name': 'orders_view', 'column_name': 'orders_view.total_amount', 'column_data_type': 'number', 'column_title': 'Orders View Total Amount', 'column_description': 'None'}"
]
}
],

File diff suppressed because one or more lines are too long

View File

@@ -2,7 +2,7 @@
This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain.
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->
```python
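# A minimal sketch of the loader this page describes; the path is a placeholder.
from langchain.document_loaders import NotebookLoader

loader = NotebookLoader("example_data/notebook.ipynb")
docs = loader.load()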

View File

@@ -46,7 +46,7 @@
"id": "04981332",
"metadata": {},
"source": [
"Create a GeoPandas dataframe from [`Open City Data`](https://python.langchain.com/docs/integrations/document_loaders/open_city_data) as an example input."
"Create a GeoPandas dataframe from [`Open City Data`](https://python.langchain.com/docs/modules/data_connection/document_loaders/integrations/open_city_data) as an example input."
]
},
{

View File

@@ -1,178 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c83b6a4c",
"metadata": {},
"source": [
"# Huawei OBS Directory\n",
"The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c2191935",
"metadata": {},
"outputs": [],
"source": [
"# Install the required package\n",
"# pip install esdk-obs-python"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "55fca3b4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import OBSDirectoryLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c3ed419f",
"metadata": {},
"outputs": [],
"source": [
"endpoint = \"your-endpoint\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "3428fd4e",
"metadata": {},
"outputs": [],
"source": [
"# Configure your access credentials\\n\n",
"config = {\n",
" \"ak\": \"your-access-key\",\n",
" \"sk\": \"your-secret-key\"\n",
"}\n",
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9beede9f",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "1e20a839",
"metadata": {},
"source": [
"## Specify a Prefix for Loading\n",
"If you want to load objects with a specific prefix from the bucket, you can use the following code:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "125f311d",
"metadata": {},
"outputs": [],
"source": [
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint, config=config, prefix=\"test_prefix\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b3488037",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "84c82c0a",
"metadata": {},
"source": [
"## Get Authentication Information from ECS\n",
"If your langchain is deployed on Huawei Cloud ECS and [Agency is set up](https://support.huaweicloud.com/intl/en-us/usermanual-ecs/ecs_03_0166.html#section7), the loader can directly get the security token from ECS without needing access key and secret key. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1db99969",
"metadata": {},
"outputs": [],
"source": [
"config = {\"get_token_from_ecs\": True}\n",
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "57dd9f35",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "30205d25",
"metadata": {},
"source": [
"## Use a Public Bucket\n",
"If your bucket's bucket policy allows anonymous access (anonymous users have `listBucket` and `GetObject` permissions), you can directly load the objects without configuring the `config` parameter."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4dfa2ef0",
"metadata": {},
"outputs": [],
"source": [
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "67d4c1d0",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,180 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4394a872",
"metadata": {},
"source": [
"# Huawei OBS File\n",
"The following code demonstrates how to load an object from the Huawei OBS (Object Storage Service) as document."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c43d811b",
"metadata": {},
"outputs": [],
"source": [
"# Install the required package\n",
"# pip install esdk-obs-python"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5e16bae6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders.obs_file import OBSFileLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "75cc7e7c",
"metadata": {},
"outputs": [],
"source": [
"endpoint = \"your-endpoint\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f9816984",
"metadata": {},
"outputs": [],
"source": [
"from obs import ObsClient\n",
"obs_client = ObsClient(access_key_id=\"your-access-key\", secret_access_key=\"your-secret-key\", server=endpoint)\n",
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\", client=obs_client)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6143b39b",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "633e05ca",
"metadata": {},
"source": [
"## Each Loader with Separate Authentication Information\n",
"If you don't need to reuse OBS connections between different loaders, you can directly configure the `config`. The loader will use the config information to initialize its own OBS client."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a5dd6a5d",
"metadata": {},
"outputs": [],
"source": [
"# Configure your access credentials\\n\n",
"config = {\n",
" \"ak\": \"your-access-key\",\n",
" \"sk\": \"your-secret-key\"\n",
"}\n",
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\",endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a741f1c",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "1e2e611c",
"metadata": {},
"source": [
"## Get Authentication Information from ECS\n",
"If your langchain is deployed on Huawei Cloud ECS and [Agency is set up](https://support.huaweicloud.com/intl/en-us/usermanual-ecs/ecs_03_0166.html#section7), the loader can directly get the security token from ECS without needing access key and secret key. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "338fafef",
"metadata": {},
"outputs": [],
"source": [
"config = {\"get_token_from_ecs\": True}\n",
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\", endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "73976c55",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "b77aa18c",
"metadata": {},
"source": [
"## Access a Publicly Accessible Object\n",
"If the object you want to access allows anonymous user access (anonymous users have `GetObject` permission), you can directly load the object without configuring the `config` parameter."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "df83d121",
"metadata": {},
"outputs": [],
"source": [
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\", endpoint=endpoint)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82a844ba",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -57,13 +57,7 @@
}
],
"source": [
"loader = MWDumpLoader(\n",
" file_path = \"example_data/testmw_pages_current.xml\", \n",
" encoding=\"utf8\",\n",
" #namespaces = [0,2,3] Optional list to load only specific namespaces. Loads all namespaces by default.\n",
" skip_redirects = True, #will skip over pages that just redirect to other pages (or not if False)\n",
" stop_on_error = False #will skip over pages that cause parsing errors (or not if False)\n",
" )\n",
"loader = MWDumpLoader(\"example_data/testmw_pages_current.xml\", encoding=\"utf8\")\n",
"documents = loader.load()\n",
"print(f\"You have {len(documents)} document(s) in your data \")"
]

View File

@@ -113,7 +113,7 @@
"\n",
"The modules are (from least to most complex):\n",
"\n",
"- [Models](https://python.langchain.com/docs/modules/model_io/models/): Supported model types and integrations.\n",
"- [Models](https://python.langchain.com/en/latest/modules/models.html): Supported model types and integrations.\n",
"\n",
"- [Prompts](https://python.langchain.com/en/latest/modules/prompts.html): Prompt management, optimization, and serialization.\n",
"\n",

View File

@@ -44,7 +44,7 @@
},
"outputs": [],
"source": [
"#!pip install py-trello beautifulsoup4 lxml"
"#!pip install py-trello beautifulsoup4"
]
},
{

View File

@@ -18,7 +18,8 @@
"outputs": [],
"source": [
"# # Install package\n",
"!pip install \"unstructured[all-docs]\"\n"
"!pip install \"unstructured[local-inference]\"\n",
"!pip install layoutparser[layoutmodels,tesseract]"
]
},
{

View File

@@ -1,7 +1,6 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "e48afb8d",
"metadata": {},
@@ -12,8 +11,7 @@
"\n",
"Below we show how to easily go from a YouTube url to text to chat!\n",
"\n",
"We wil use the `OpenAIWhisperParser`, which will use the OpenAI Whisper API to transcribe audio to text, \n",
"and the `OpenAIWhisperParserLocal` for local support and running on private clouds or on premise.\n",
"We wil use the `OpenAIWhisperParser`, which will use the OpenAI Whisper API to transcribe audio to text.\n",
"\n",
"Note: You will need to have an `OPENAI_API_KEY` supplied."
]
@@ -26,7 +24,7 @@
"outputs": [],
"source": [
"from langchain.document_loaders.generic import GenericLoader\n",
"from langchain.document_loaders.parsers import OpenAIWhisperParser, OpenAIWhisperParserLocal\n",
"from langchain.document_loaders.parsers import OpenAIWhisperParser\n",
"from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader"
]
},
@@ -48,8 +46,7 @@
"outputs": [],
"source": [
"! pip install yt_dlp\n",
"! pip install pydub\n",
"! pip install librosa"
"! pip install pydub"
]
},
{
@@ -66,18 +63,6 @@
"Let's take the first lecture of Andrej Karpathy's YouTube course as an example! "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8682f256",
"metadata": {},
"outputs": [],
"source": [
"# set a flag to switch between local and remote parsing\n",
"# change this to True if you want to use local parsing\n",
"local = False"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -117,10 +102,7 @@
"save_dir = \"~/Downloads/YouTube\"\n",
"\n",
"# Transcribe the videos to text\n",
"if local:\n",
" loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParserLocal())\n",
"else:\n",
" loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())\n",
"loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())\n",
"docs = loader.load()"
]
},
@@ -293,7 +275,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -307,12 +289,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
},
"vscode": {
"interpreter": {
"hash": "97cc609b13305c559618ec78a438abc56230b9381f827f22d070313b9a1f3777"
}
"version": "3.9.16"
}
},
"nbformat": 4,

View File

@@ -1,231 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cc6caafa",
"metadata": {},
"source": [
"# Fireworks\n",
"\n",
">[Fireworks](https://www.fireworks.ai/) is an AI startup focused on accelerating product development on generative AI by creating an innovative AI experiment and production platform. \n",
"\n",
"This example goes over how to use LangChain to interact with `Fireworks` models."
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "60b6dbb2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms.fireworks import Fireworks, FireworksChat\n",
"from langchain import PromptTemplate, LLMChain\n",
"from langchain.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
")\n",
"import os"
]
},
{
"cell_type": "markdown",
"id": "ccff689e",
"metadata": {},
"source": [
"# Setup\n",
"\n",
"Contact Fireworks AI for the an API Key to access our models\n",
"\n",
"Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-13b-chat."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9ca87a2e",
"metadata": {},
"outputs": [],
"source": [
"# Initialize a Fireworks LLM\n",
"os.environ['FIREWORKS_API_KEY'] = \"\" #change this to your own API KEY\n",
"llm = Fireworks(model_id=\"fireworks-llama-v2-13b-chat\")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "43a11ba8",
"metadata": {},
"outputs": [],
"source": [
"# Create LLM chain\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
]
},
{
"cell_type": "markdown",
"id": "acc24d0c",
"metadata": {},
"source": [
"# Calling the Model\n",
"\n",
"You can use the LLMs to call the model for specified prompt(s). \n",
"\n",
"Current Specified Models: \n",
"* fireworks-falcon-7b, fireworks-falcon-40b-w8a16\n",
"* fireworks-guanaco-30b, fireworks-guanaco-33b-w8a16\n",
"* fireworks-llama-7b, fireworks-llama-13b, fireworks-llama-30b-w8a16\n",
"* fireworks-llama-v2-13b, fireworks-llama-v2-13b-chat, fireworks-llama-v2-13b-w8a16, fireworks-llama-v2-13b-chat-w8a16\n",
"* fireworks-llama-v2-7b, fireworks-llama-v2-7b-chat, fireworks-llama-v2-7b-w8a16, fireworks-llama-v2-7b-chat-w8a16"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "bf0a425c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"It's a question that has been debated for years, with different analysts and fans making their cases for various signal-callers. Here are some of the top contenders for the title of best quarterback in the NFL:\n",
"\n",
"1. Tom Brady: The New England Patriots legend has won six Super Bowls and has been named Super Bowl MVP four times. He's known for his precision passing, pocket presence, and ability to read defenses.\n",
"2. Aaron Rodgers: The Green Bay Packers quarterback has won two Super Bowls and has been named NFL MVP twice. He's known for his quick release, accuracy, and ability to extend plays with his feet.\n",
"3. Drew Brees: The New Orleans Saints quarterback has won a Super Bowl and has been named NFL MVP once. He's known for his accuracy, pocket presence, and ability to read defenses.\n",
"4. Patrick Mahomes: The Kansas City Chiefs quarterback has won a Super Bowl and has been named NFL MVP twice. He's known for his arm strength, athleticism, and ability to make plays outside of the pocket.\n",
"5. Russell Wilson: The Seattle Seahawks quarterback has won a Super Bowl and has been named NFL MVP once. He's known for his mobility, accuracy, and ability to extend plays with his feet.\n",
"\n",
"Of course, there are other talented quarterbacks in the league, such as Lamar Jackson, Deshaun Watson, and Carson Wentz, who could also be considered among the best. Ultimately, the answer to the question of who's the best quarterback in the NFL is subjective and can vary depending on individual perspectives and criteria.\n"
]
}
],
"source": [
"#single prompt\n",
"output = llm(\"Who's the best quarterback in the NFL?\")\n",
"print(output)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "afc7de6f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"generations=[[Generation(text=\"\\nWho is the best cricket player in the world in 2016?\\nThe best cricket player in the world in 2016 is Virat Kohli. The Indian captain has had a fabulous year, scoring heavily in all formats of the game, leading India to several victories, and breaking several records. In Test cricket, Kohli has scored 1215 runs at an average of 75.33 with 6 centuries and 4 fifties, which is the highest number of runs scored by any player in a calendar year. In ODI cricket, he has scored 1143 runs at an average of 83.42 with 7 centuries and 6 fifties, which is also the highest number of runs scored by any player in a calendar year. Additionally, Kohli has led India to the number one ranking in Test cricket, and has been named the ICC Test Player of the Year and the ICC ODI Player of the Year.\\nVirat Kohli has been in incredible form in 2016, and his performances have made him the standout player of the year. Other players who have had a great year include Steve Smith, Joe Root, and Kane Williamson, but Kohli's consistency and dominance in all formats of the game make him the best cricket player in the world in 2016.\", generation_info=None)], [Generation(text=\"\\n\\nA: LeBron James.\\n\\nB: Kevin Durant.\\n\\nC: Steph Curry.\\n\\nD: James Harden.\\n\\nE: Other (please specify).\\n\\nWhat's your answer?\", generation_info=None)]] llm_output={'token_usage': {}, 'model_id': 'fireworks-llama-v2-13b-chat'} run=[RunInfo(run_id=UUID('d14b6bee-7692-46ad-8798-acb6f72fc7fb')), RunInfo(run_id=UUID('b9f5b3b5-9e62-4eaf-b269-ecf0cbbcfb82'))]\n"
]
}
],
"source": [
"#calling multiple prompts\n",
"output = llm.generate([\"Who's the best cricket player in 2016?\", \"Who's the best basketball player in the league?\"])\n",
"print(output)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "b801c20d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Kansas City in December can be quite chilly, with average high\n"
]
}
],
"source": [
"#setting parameters: model_id, temperature, max_tokens, top_p\n",
"llm = Fireworks(model_id=\"fireworks-llama-v2-13b-chat\", temperature=0.7, max_tokens=15, top_p=1.0)\n",
"print(llm(\"What's the weather like in Kansas City in December?\"))"
]
},
{
"cell_type": "markdown",
"id": "137662a6",
"metadata": {},
"source": [
"# Create and Run Chain\n",
"\n",
"Create a prompt template to be used with the LLM Chain. Once this prompt template is created, initialize the chain with the LLM and prompt template, and run the chain with the specified prompts."
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "fd2c6bc1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"(Note: I'm just an AI and not a branding expert, so take this as a starting point for your own research and brainstorming.)\n",
"A good name for a company that makes football helmets could be:\n",
"\n",
"1. Helix Pro: This name plays off the idea of a helix, or spiral, shape that is commonly associated with football helmets. \"Pro\" implies a professional-level product.\n",
"2. Gridiron Gear: \"Gridiron\" is a term used to describe a football field, and \"gear\" highlights the company's focus on producing high-quality football helmets.\n",
"3. Linebacker Lab: \"Linebacker\" is a position on the football field, and \"Lab\" suggests a focus on research and development.\n",
"4. Helmet Hut: This name is simple and easy to remember, and it immediately conveys the company's focus on football helmets.\n",
"5. Tackle Tech: \"Tackle\" is a term used in football to describe a hit or collision, and \"Tech\" implies a focus on advanced technology and innovation.\n",
"6. Victory Vest: \"Victory\" implies a focus on winning and success, and \"Vest\" could suggest a protective or armored design.\n",
"7. Pigskin Pro: \"Pigskin\" is a term used to describe a football, and \"Pro\" implies a professional-level product.\n",
"8. Football Fusion: This name could suggest a combination of different materials or technologies to create a high-quality football helmet.\n",
"9. Endzone Edge: \"Endzone\" is the area of the football field where a team scores a touchdown, and \"Edge\" implies a competitive advantage.\n",
"10. MVP Masks: \"MVP\" stands for \"Most Valuable Player,\" and \"Masks\" highlights the protective nature of the company's football helmets.\n",
"\n",
"Remember, the name you choose for your company should be memorable, easy to pronounce and spell, and convey a sense of quality and professionalism. It's also important to check that the name isn't already in use by another company, and to consider any potential trademark issues.\n"
]
}
],
"source": [
"human_message_prompt = HumanMessagePromptTemplate(\n",
" prompt=PromptTemplate(\n",
" template=\"What is a good name for a company that makes {product}?\",\n",
" input_variables=[\"product\"],\n",
" )\n",
")\n",
"\n",
"chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])\n",
"chat = Fireworks()\n",
"chain = LLMChain(llm=chat, prompt=chat_prompt_template)\n",
"output = chain.run(\"football helmets\")\n",
"\n",
"print(output)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -36,7 +36,7 @@
"## Deployments\n",
"With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.\n",
"\n",
"_**Note**: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the `AzureChatOpenAI` class. For docs on Azure chat see [Azure Chat OpenAI documentation](/docs/integrations/chat/azure_chat_openai)._\n",
"_**Note**: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the `AzureChatOpenAI` class. For docs on Azure chat see [Azure Chat OpenAI documentation](/docs/modules/model_io/models/chat/integrations/azure_chat_openai)._\n",
"\n",
"Let's say your deployment name is `text-davinci-002-prod`. In the `openai` Python API, you can specify this deployment with the `engine` parameter. For example:\n",
"\n",

View File

@@ -28,9 +28,9 @@
"\n",
"To use the wrapper, you must [deploy a model on AzureML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) and obtain the following parameters:\n",
"\n",
"* `endpoint_api_key`: Required - The API key provided by the endpoint\n",
"* `endpoint_url`: Required - The REST endpoint url provided by the endpoint\n",
"* `deployment_name`: Not required - The deployment name of the model using the endpoint"
"* `endpoint_api_key`: The API key provided by the endpoint\n",
"* `endpoint_url`: The REST endpoint url provided by the endpoint\n",
"* `deployment_name`: The deployment name of the endpoint"
]
},
{
@@ -39,14 +39,11 @@
"source": [
"## Content Formatter\n",
"\n",
"The `content_formatter` parameter is a handler class for transforming the request and response of an AzureML endpoint to match with required schema. Since there are a wide range of models in the model catalog, each of which may process data differently from one another, a `ContentFormatterBase` class is provided to allow users to transform data to their liking. The following content formatters are provided:\n",
"The `content_formatter` parameter is a handler class for transforming the request and response of an AzureML endpoint to match with required schema. Since there are a wide range of models in the model catalog, each of which may process data differently from one another, a `ContentFormatterBase` class is provided to allow users to transform data to their liking. Additionally, there are three content formatters already provided:\n",
"\n",
"* `GPT2ContentFormatter`: Formats request and response data for GPT2\n",
"* `DollyContentFormatter`: Formats request and response data for the Dolly-v2\n",
"* `OSSContentFormatter`: Formats request and response data for models from the Open Source category in the Model Catalog. Note, that not all models in the Open Source category may follow the same schema\n",
"* `DollyContentFormatter`: Formats request and response data for the `dolly-v2-12b` model\n",
"* `HFContentFormatter`: Formats request and response data for text-generation Hugging Face models\n",
"* `LLamaContentFormatter`: Formats request and response data for LLaMa2\n",
"\n",
"*Note: `OSSContentFormatter` is being deprecated and replaced with `GPT2ContentFormatter`. The logic is the same but `GPT2ContentFormatter` is a more suitable name. You can still continue to use `OSSContentFormatter` as the changes are backwards compatibile.*\n",
"\n",
"Below is an example using a summarization model from Hugging Face."
]
@@ -103,6 +100,7 @@
"llm = AzureMLOnlineEndpoint(\n",
" endpoint_api_key=os.getenv(\"BART_ENDPOINT_API_KEY\"),\n",
" endpoint_url=os.getenv(\"BART_ENDPOINT_URL\"),\n",
" deployment_name=\"linydub-bart-large-samsum-3\",\n",
" model_kwargs={\"temperature\": 0.8, \"max_new_tokens\": 400},\n",
" content_formatter=content_formatter,\n",
")\n",
@@ -169,6 +167,7 @@
"llm = AzureMLOnlineEndpoint(\n",
" endpoint_api_key=os.getenv(\"DOLLY_ENDPOINT_API_KEY\"),\n",
" endpoint_url=os.getenv(\"DOLLY_ENDPOINT_URL\"),\n",
" deployment_name=\"databricks-dolly-v2-12b-4\",\n",
" model_kwargs={\"temperature\": 0.8, \"max_tokens\": 300},\n",
" content_formatter=content_formatter,\n",
")\n",

View File

@@ -55,11 +55,7 @@
" history=[[\"我将从美国到中国来旅游,出行前希望了解中国的城市\", \"欢迎问我任何问题。\"]],\n",
" top_p=0.9,\n",
" model_kwargs={\"sample_model_args\": False},\n",
")\n",
"\n",
"# turn on with_history only when you want the LLM object to keep track of the conversation history\n",
"# and send the accumulated context to the backend model api, which make it stateful. By default it is stateless.\n",
"# llm.with_history = True"
")"
]
},
{
@@ -99,6 +95,22 @@
"\n",
"llm_chain.run(question)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"By Default, ChatGLM is statful to keep track of the conversation history and send the accumulated context to the model. To enable stateless mode, we could set ChatGLM.with_history as `False` explicitly."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"llm.with_history = False"
]
}
],
"metadata": {

View File

@@ -1,15 +1,18 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Google Cloud Platform Vertex AI PaLM \n",
"\n",
"Note: This is seperate from the Google PaLM integration, it exposes [Vertex AI PaLM API](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview) on Google Cloud. \n",
"Note: This is seperate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
"\n",
"By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) Customer Data to train its foundation models as part of Google Cloud`s AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google's Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).\n",
"PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the [GCP Service Specific Terms](https://cloud.google.com/terms/service-terms). \n",
"\n",
"Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the [launch stage descriptions](https://cloud.google.com/products#product-launch-stages). Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview [terms and conditions](https://cloud.google.com/trustedtester/aitos) (Preview Terms).\n",
"\n",
"For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms).\n",
"\n",
"To use Vertex AI PaLM you must have the `google-cloud-aiplatform` Python package installed and either:\n",
"- Have credentials configured for your environment (gcloud, workload identity, etc...)\n",
@@ -98,7 +101,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [

View File

@@ -216,7 +216,7 @@
},
"outputs": [],
"source": [
"from langchain_experimental.llms import JsonFormer\n",
"from langchain.experimental.llms import JsonFormer\n",
"\n",
"json_former = JsonFormer(json_schema=decoder_schema, pipeline=hf_model)"
]

View File

@@ -1,176 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Minimax\n",
"\n",
"[Minimax](https://api.minimax.chat) is a Chinese startup that provides natural language processing models for companies and individuals.\n",
"\n",
"This example demonstrates using Langchain to interact with Minimax."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Setup\n",
"\n",
"To run this notebook, you'll need a [Minimax account](https://api.minimax.chat), an [API key](https://api.minimax.chat/user-center/basic-information/interface-key), and a [Group ID](https://api.minimax.chat/user-center/basic-information)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Single model call"
]
},
{
"cell_type": "code",
"execution_count": 33,
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import Minimax"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"# Load the model\n",
"minimax = Minimax(minimax_api_key=\"YOUR_API_KEY\", minimax_group_id=\"YOUR_GROUP_ID\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"is_executing": true
}
},
"outputs": [],
"source": [
"# Prompt the model\n",
"minimax(\"What is the difference between panda and bear?\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Chained model calls"
]
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"# get api_key and group_id: https://api.minimax.chat/user-center/basic-information\n",
"# We need `MINIMAX_API_KEY` and `MINIMAX_GROUP_ID`\n",
"\n",
"import os\n",
"\n",
"os.environ[\"MINIMAX_API_KEY\"] = \"YOUR_API_KEY\"\n",
"os.environ[\"MINIMAX_GROUP_ID\"] = \"YOUR_GROUP_ID\""
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"from langchain.llms import Minimax\n",
"from langchain import PromptTemplate, LLMChain"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"template = \"\"\"Question: {question}\n",
"\n",
"Answer: Let's think step by step.\"\"\"\n",
"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"llm = Minimax()"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"question = \"What NBA team won the Championship in the year Jay Zhou was born?\"\n",
"\n",
"llm_chain.run(question)"
],
"metadata": {
"collapsed": false
}
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -5,7 +5,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# OctoAI Compute Service\n",
"## OctoAI Compute Service\n",
"This example goes over how to use LangChain to interact with `OctoAI` [LLM endpoints](https://octoai.cloud/templates)\n",
"## Environment setup\n",
"\n",

View File

@@ -16,9 +16,7 @@
"metadata": {},
"source": [
"## Install petals\n",
"The `petals` package is required to use the Petals API. Install `petals` using `pip3 install petals`.\n",
"\n",
"For Apple Silicon(M1/M2) users please follow this guide [https://github.com/bigscience-workshop/petals/issues/147#issuecomment-1365379642](https://github.com/bigscience-workshop/petals/issues/147#issuecomment-1365379642) to install petals "
"The `petals` package is required to use the Petals API. Install `petals` using `pip3 install petals`."
]
},
{
@@ -64,7 +62,7 @@
},
"outputs": [
{
"name": "stdout",
"name": "stdin",
"output_type": "stream",
"text": [
" ········\n"

View File

@@ -162,7 +162,7 @@
}
],
"source": [
"from langchain_experimental.llms import RELLM\n",
"from langchain.experimental.llms import RELLM\n",
"\n",
"model = RELLM(pipeline=hf_model, regex=pattern, max_new_tokens=200)\n",
"\n",

File diff suppressed because one or more lines are too long

View File

@@ -1,176 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Xorbits Inference (Xinference)\n",
"\n",
"[Xinference](https://github.com/xorbitsai/inference) is a powerful and versatile library designed to serve LLMs, \n",
"speech recognition models, and multimodal models, even on your laptop. It supports a variety of models compatible with GGML, such as chatglm, baichuan, whisper, vicuna, orca, and many others. This notebook demonstrates how to use Xinference with LangChain."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"Install `Xinference` through PyPI:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install \"xinference[all]\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deploy Xinference Locally or in a Distributed Cluster.\n",
"\n",
"For local deployment, run `xinference`. \n",
"\n",
"To deploy Xinference in a cluster, first start an Xinference supervisor using the `xinference-supervisor`. You can also use the option -p to specify the port and -H to specify the host. The default port is 9997.\n",
"\n",
"Then, start the Xinference workers using `xinference-worker` on each server you want to run them on. \n",
"\n",
"You can consult the README file from [Xinference](https://github.com/xorbitsai/inference) for more information.\n",
"## Wrapper\n",
"\n",
"To use Xinference with LangChain, you need to first launch a model. You can use command line interface (CLI) to do so:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model uid: 7167b2b0-2a04-11ee-83f0-d29396a3f064\n"
]
}
],
"source": [
"!xinference launch -n vicuna-v1.3 -f ggmlv3 -q q4_0"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A model UID is returned for you to use. Now you can use Xinference with LangChain:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' You can visit the Eiffel Tower, Notre-Dame Cathedral, the Louvre Museum, and many other historical sites in Paris, the capital of France.'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.llms import Xinference\n",
"\n",
"llm = Xinference(\n",
" server_url=\"http://0.0.0.0:9997\",\n",
" model_uid = \"7167b2b0-2a04-11ee-83f0-d29396a3f064\"\n",
")\n",
"\n",
"llm(\n",
" prompt=\"Q: where can we visit in the capital of France? A:\",\n",
" generate_config={\"max_tokens\": 1024, \"stream\": True},\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Integrate with a LLMChain"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"A: You can visit many places in Paris, such as the Eiffel Tower, the Louvre Museum, Notre-Dame Cathedral, the Champs-Elysées, Montmartre, Sacré-Cœur, and the Palace of Versailles.\n"
]
}
],
"source": [
"from langchain import PromptTemplate, LLMChain\n",
"\n",
"template = \"Where can we visit in the capital of {country}?\"\n",
"\n",
"prompt = PromptTemplate(template=template, input_variables=[\"country\"])\n",
"\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"\n",
"generated = llm_chain.run(country=\"France\")\n",
"print(generated)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lastly, terminate the model when you do not need to use it:"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"!xinference terminate --model-uid \"7167b2b0-2a04-11ee-83f0-d29396a3f064\""
]
}
],
"metadata": {
"kernelspec": {
"display_name": "myenv3.9",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -1,61 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "91c6a7ef",
"metadata": {},
"source": [
"# Streamlit Chat Message History\n",
"\n",
"This notebook goes over how to use Streamlit to store chat message history. Note, StreamlitChatMessageHistory only works when run in a Streamlit app. For more on Streamlit check out their\n",
"[getting started documentation](https://docs.streamlit.io/library/get-started)."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d15e3302",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import StreamlitChatMessageHistory\n",
"\n",
"history = StreamlitChatMessageHistory(\"foo\")\n",
"\n",
"history.add_user_message(\"hi!\")\n",
"history.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "64fc465e",
"metadata": {},
"outputs": [],
"source": [
"history.messages"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -177,7 +177,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.11.3"
}
},
"nbformat": 4,

View File

@@ -22,7 +22,7 @@ Have `docker desktop` installed.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/airbyte_json).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/airbyte_json.html).
```python
from langchain.document_loaders import AirbyteJSONLoader
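
# A minimal sketch; the path is a placeholder for a local Airbyte JSON output file.
loader = AirbyteJSONLoader("/path/to/airbyte/jsonl/output_file.jsonl")
docs = loader.load()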

View File

@@ -25,4 +25,4 @@ pip install pyairtable
from langchain.document_loaders import AirtableLoader
```
See an [example](/docs/integrations/document_loaders/airtable.html).
See an [example](/docs/modules/data_connection/document_loaders/integrations/airtable.html).
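
A minimal usage sketch; the token and IDs below are placeholders:

```python
from langchain.document_loaders import AirtableLoader

loader = AirtableLoader("your-api-token", "your-table-id", "your-base-id")
documents = loader.load()
```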

Some files were not shown because too many files have changed in this diff.