Mirror of https://github.com/hwchase17/langchain.git (synced 2026-02-04 00:00:34 +00:00)

Compare commits: eugene/exp ... wfh/embedd (195 commits)
Commits compared (195):

b3d30eaa7c, 5efe913936, 8c35fcb571, e45be8b3f6, 0d5a90f30a, 6b007e2829, be638ad77d, 115a77142a, f0b0c72d98, 6aee589eec,
5b7ff215e8, 0f0ccfe7f6, 2759e2d857, 0f68054401, 0ead8ea708, c7ea6e9ff8, 812419d946, 873a80e496, d1b95db874, 6c3573e7f6,
179a39954d, 6f0bccfeb5, e68a1d73d0, 29f51055e8, 5d765408ce, 404d103c41, 47eea32f6a, b786335dd1, f81e613086, 8ef7e14a85,
53e4148a1b, 4e8f11b36a, 2928a1a3c9, 814faa9de5, 8a8917e0d9, b2b71b0d35, 8022293124, 1f54ec899b, a137492b53, e283dc8d50,
81e0cbf2d5, 37aade19da, 43dffe39fb, 71f98db2fe, f68f3b23d7, 206901fa01, 7ea2b08d1f, 368aa4ede7, 5018af8839, 96b0ff182e,
59194c2214, ee1d13678e, 1335f2b9f8, 16551536e3, c5fb3b6069, 1ec0b18379, f31047a394, 5c516945d0, d2adec3818, 7c5c0557cb,
68113348cc, b574507c51, 31820a31e4, 13ccf202de, 6705928b9d, 8961c720b8, 73072d3db8, 2de028834f, a7000ee89e, 9c2b29a1cb,
f190bc3e83, 7df2dfc4c2, ed9a0f8185, 465faab935, 0ec020698f, bd2e298468, 66226d1d4d, e83250cc5f, 2a26cc6d2b, 3fbb737bb3,
53f3793504, ec40ead980, ff5024634e, 1e8fca5518, 8061994c61, 8d2344db43, b4a126ae71, 7e70cd2a28, de61ebd9e0, c19a0b9c10,
5a490a79f4, 64d0a0fcc0, bca0749a11, 07d6d1ca38, 144b4c0c78, 844eca98d5, 1ab773c742, 15de57b848, 4780156955, 5e3b968078,
913a156cff, 893f3014af, a8be207ea3, 6556a8fcfd, a795c3d860, 9975ba4124, 7f9c6c3baa, b2f8a5bae9, e98e2b2b81, 529cb2e30c,
04ebdbe98f, 08f5e6b801, 4923cf029a, 549720ae51, 18a2452121, 4d526c49ed, 8f14ddefdf, 490ad93b3c, b65a9414bb, ae4638aa35,
872abb4198, 412fa4e1db, b7c0eb9ecb, 13b4f465e2, 7d79178827, d935573362, 3314f54383, f63240649c, 17953ab61f, 2448043b84,
3892cefac6, 8ee56b9a5b, b7d6e1909c, 62b8b459c6, 2311d57df4, 7124377524, abe4c361f9, e0de62f6da, 2db2987b1b, fab24457bc,
3a78450883, af7e70d4af, 06bdbe06fe, e62a1686e2, 760c278fe0, 61dd92f821, 394b67ab92, d5884017a9, 0ad2d5f27a, 82df923f37,
1b0bfa54cf, c7ff5f19a8, a221a9ced0, a1a650c743, 1efb9bae5f, 877d384bc9, e66759cc9d, ecd4aae818, 6dd18eee26, a003a0baf6,
e758e9e7f5, caa6caeb8a, 25b8cc7e3d, d7e6770de8, 594f195e54, ba4e82bb47, dc3ca44e05, b2e4b9dca4, cddd8ae83d, f499e6ea6a,
539574670c, 2ab13ab743, 01217b2247, 55beab326c, 41524304bf, f5bf893035, 0e9e5b5202, 68763bd25f, ee902ba7b2, ff98fad2d9,
94a693e2ee, 0eca3e7d90, cf608f876b, 1bbadde77b, 5918c2ffc0, 097538882d, 619d6c0b14, b28610c13a, f273c99158, 32ca9dce3e,
d5ad0d2421, ef930eda9a, 08cf728e57, 43d835dda4, 472b434f02
@@ -15,7 +15,11 @@ You may use the button above, or follow these steps to open this repo in a Codes
 For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).

 ## VS Code Dev Containers

-[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)
+[](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
+
+Note: If you click this link you will open the main repo and not your local cloned repo. You can use this link and substitute your username and cloned repo name:
+https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/<yourusername>/<yourclonedreponame>

 If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
.github/workflows/_release.yml (vendored, 1 change)
@@ -37,6 +37,7 @@ jobs:
           echo version=$(poetry version --short) >> $GITHUB_OUTPUT
       - name: Create Release
         uses: ncipollo/release-action@v1
+        if: ${{ inputs.working-directory == 'libs/langchain' }}
         with:
           artifacts: "dist/*"
           token: ${{ secrets.GITHUB_TOKEN }}
.gitignore (vendored, 1 change)
@@ -162,6 +162,7 @@ docs/.docusaurus/
 docs/.cache-loader/
 docs/_dist
 docs/api_reference/api_reference.rst
+docs/api_reference/experimental_api_reference.rst
 docs/api_reference/_build
 docs/api_reference/*/
 !docs/api_reference/_static/
@@ -43,6 +43,10 @@ Now:

 `from langchain_experimental.sql import SQLDatabaseChain`

+Alternatively, if you are just interested in using the query generation part of the SQL chain, you can check out [`create_sql_query_chain`](https://github.com/langchain-ai/langchain/blob/master/docs/extras/use_cases/tabular/sql_query.ipynb)
+
+`from langchain.chains import create_sql_query_chain`
+
 ## `load_prompt` for Python files

 Note: this only applies if you want to load Python files as prompts.
@@ -12,14 +12,14 @@
 [](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)
 [](https://codespaces.new/hwchase17/langchain)
 [](https://star-history.com/#hwchase17/langchain)
-[](https://libraries.io/github/hwchase17/langchain)
+[](https://libraries.io/github/langchain-ai/langchain)
 [](https://github.com/hwchase17/langchain/issues)

 Looking for the JS/TS version? Check out [LangChain.js](https://github.com/hwchase17/langchainjs).

-**Production Support:** As you move your LangChains into production, we'd love to offer more comprehensive support.
-Please fill out [this form](https://6w1pwbss0py.typeform.com/to/rrbrdTH2) and we'll set up a dedicated support Slack channel.
+**Production Support:** As you move your LangChains into production, we'd love to offer more hands-on support.
+Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to share more about what you're building, and our team will get in touch.

 ## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28
@@ -13,5 +13,6 @@ cp -r {docs_skeleton,snippets} _dist
 cp -r extras/* _dist/docs_skeleton/docs
 cd _dist/docs_skeleton
 poetry run nbdoc_build
+poetry run python generate_api_reference_links.py
 yarn install
 yarn start
@@ -23,6 +23,7 @@ from sphinx.util.docutils import SphinxDirective
 _DIR = Path(__file__).parent.absolute()
 sys.path.insert(0, os.path.abspath("."))
 sys.path.insert(0, os.path.abspath("../../libs/langchain"))
+sys.path.insert(0, os.path.abspath("../../libs/experimental"))

 with (_DIR.parents[1] / "libs" / "langchain" / "pyproject.toml").open("r") as f:
     data = toml.load(f)
@@ -5,13 +5,15 @@ from pathlib import Path

 ROOT_DIR = Path(__file__).parents[2].absolute()
 PKG_DIR = ROOT_DIR / "libs" / "langchain" / "langchain"
+EXP_DIR = ROOT_DIR / "libs" / "experimental" / "langchain_experimental"
 WRITE_FILE = Path(__file__).parent / "api_reference.rst"
+EXP_WRITE_FILE = Path(__file__).parent / "experimental_api_reference.rst"


-def load_members() -> dict:
+def load_members(dir: Path) -> dict:
     members: dict = {}
-    for py in glob.glob(str(PKG_DIR) + "/**/*.py", recursive=True):
-        module = py[len(str(PKG_DIR)) + 1 :].replace(".py", "").replace("/", ".")
+    for py in glob.glob(str(dir) + "/**/*.py", recursive=True):
+        module = py[len(str(dir)) + 1 :].replace(".py", "").replace("/", ".")
         top_level = module.split(".")[0]
         if top_level not in members:
             members[top_level] = {"classes": [], "functions": []}
@@ -26,12 +28,10 @@ def load_members() -> dict:
     return members


-def construct_doc(members: dict) -> str:
-    full_doc = """\
-.. _api_reference:
-
+def construct_doc(pkg: str, members: dict) -> str:
+    full_doc = f"""\
 =============
-API Reference
+``{pkg}`` API Reference
 =============

 """
@@ -40,12 +40,12 @@ API Reference
         functions = _members["functions"]
         if not (classes or functions):
             continue
-        section = f":mod:`langchain.{module}`"
+        section = f":mod:`{pkg}.{module}`"
         full_doc += f"""\
 {section}
 {'=' * (len(section) + 1)}

-.. automodule:: langchain.{module}
+.. automodule:: {pkg}.{module}
     :no-members:
     :no-inherited-members:

@@ -56,7 +56,7 @@ API Reference
             full_doc += f"""\
 Classes
 --------------
-.. currentmodule:: langchain
+.. currentmodule:: {pkg}

 .. autosummary::
     :toctree: {module}
@@ -70,7 +70,7 @@ Classes
             full_doc += f"""\
 Functions
 --------------
-.. currentmodule:: langchain
+.. currentmodule:: {pkg}

 .. autosummary::
     :toctree: {module}
@@ -83,10 +83,14 @@ Functions


 def main() -> None:
-    members = load_members()
-    full_doc = construct_doc(members)
+    lc_members = load_members(PKG_DIR)
+    lc_doc = ".. _api_reference:\n\n" + construct_doc("langchain", lc_members)
     with open(WRITE_FILE, "w") as f:
-        f.write(full_doc)
+        f.write(lc_doc)
+    exp_members = load_members(EXP_DIR)
+    exp_doc = ".. _experimental_api_reference:\n\n" + construct_doc("langchain_experimental", exp_members)
+    with open(EXP_WRITE_FILE, "w") as f:
+        f.write(exp_doc)


 if __name__ == "__main__":
File diff suppressed because one or more lines are too long
@@ -45,6 +45,9 @@
         <li class="nav-item">
           <a class="sk-nav-link nav-link" href="{{ pathto('api_reference') }}">API</a>
         </li>
+        <li class="nav-item">
+          <a class="sk-nav-link nav-link" href="{{ pathto('experimental_api_reference') }}">Experimental</a>
+        </li>
         <li class="nav-item">
           <a class="sk-nav-link nav-link" target="_blank" rel="noopener noreferrer" href="https://python.langchain.com/">Python Docs</a>
         </li>
@@ -0,0 +1,9 @@
+# LangChain Expression Language
+
+import DocCardList from "@theme/DocCardList";
+
+LangChain Expression Language is a declarative way to easily compose chains together.
+Any chain constructed this way will automatically have full sync, async, and streaming support.
+See guides below for how to interact with chains constructed this way as well as cookbook examples.
+
+<DocCardList />
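To make the composition idea concrete, here is a minimal sketch of piping a prompt into a chat model; it mirrors the `Runnable` interface notebook further below, and assumes the `langchain` package of this branch with an OpenAI API key configured:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
model = ChatOpenAI()

# The `|` operator composes Runnables; the resulting chain supports
# invoke/stream/batch and their async counterparts with no extra code.
chain = prompt | model
print(chain.invoke({"topic": "bears"}).content)
```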
docs/docs_skeleton/docs/guides/safety/index.mdx (new file, 6 lines)
@@ -0,0 +1,6 @@
+# Preventing harmful outputs
+
+One of the key concerns with using LLMs is that they may generate harmful or unethical text. This is an area of active research in the field. Here we present some built-in chains inspired by this research, which are intended to make the outputs of LLMs safer.
+
+- [Moderation chain](/docs/use_cases/safety/moderation): Explicitly check if any output text is harmful and flag it.
+- [Constitutional chain](/docs/use_cases/safety/constitutional_chain): Prompt the model with a set of principles which should guide its behavior.
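As a hedged illustration of the first bullet, a minimal sketch of the built-in moderation chain (assuming the `langchain` package of this branch and an OpenAI API key in the environment):

```python
from langchain.chains import OpenAIModerationChain

# Checks text against OpenAI's moderation endpoint; by default the
# chain returns the text unchanged unless it is flagged.
moderation_chain = OpenAIModerationChain()
print(moderation_chain.run("This is okay text."))
```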
@@ -28,7 +28,7 @@ navigating around a browser.
 ### [OpenAI Functions](/docs/modules/agents/agent_types/openai_functions_agent.html)

 Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been explicitly fine-tuned to detect when a
-function should to be called and respond with the inputs that should be passed to the function.
+function should be called and respond with the inputs that should be passed to the function.
 The OpenAI Functions Agent is designed to work with these models.

 ### [Conversational](/docs/modules/agents/agent_types/chat_conversation_agent.html)
@@ -1,6 +1,6 @@
 # OpenAI functions

-Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to detect when a function should to be called and respond with the inputs that should be passed to the function.
+Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to detect when a function should be called and respond with the inputs that should be passed to the function.
 In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions.
 The goal of the OpenAI Function APIs is to more reliably return valid and useful function calls than a generic text completion or chat API.
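For concreteness, a minimal sketch of exercising function calling through the `langchain` package of this branch; the function schema is a made-up example and an OpenAI API key is assumed:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Hypothetical function schema, following OpenAI's weather example.
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City, e.g. San Francisco"},
            },
            "required": ["location"],
        },
    }
]

model = ChatOpenAI(model="gpt-3.5-turbo-0613")
message = model.predict_messages(
    [HumanMessage(content="What's the weather like in Boston?")], functions=functions
)
# The model's chosen function call arrives in additional_kwargs.
print(message.additional_kwargs.get("function_call"))
```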
@@ -1,8 +0,0 @@
----
-sidebar_position: 4
----
-# Additional
-
-import DocCardList from "@theme/DocCardList";
-
-<DocCardList />
@@ -1,7 +0,0 @@
-# Dynamically selecting from multiple prompts
-
-This notebook demonstrates how to use the `RouterChain` paradigm to create a chain that dynamically selects the prompt to use for a given input. Specifically we show how to use the `MultiPromptChain` to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt.
-
-import Example from "@snippets/modules/chains/additional/multi_prompt_router.mdx"
-
-<Example/>
@@ -1,6 +1,6 @@
 # Sequential

 <!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->


 The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.
@@ -1,8 +0,0 @@
----
-sidebar_position: 3
----
-# Popular
-
-import DocCardList from "@theme/DocCardList";
-
-<DocCardList />
@@ -1,8 +0,0 @@
-# Summarization
-
-A summarization chain can be used to summarize multiple documents. One way is to input multiple smaller documents, after they have been divided into chunks, and operate over them with a MapReduceDocumentsChain. You can also choose instead for the chain that does summarization to be a StuffDocumentsChain, or a RefineDocumentsChain.
-
-import Example from "@snippets/modules/chains/popular/summarize.mdx"
-
-<Example/>
@@ -1 +0,0 @@
-label: 'Integrations'
@@ -0,0 +1,17 @@
+---
+sidebar_position: 1
+---
+# Chat Messages
+
+:::info
+Head to [Integrations](/docs/integrations/memory/) for documentation on built-in memory integrations with 3rd-party databases and tools.
+:::
+
+One of the core utility classes underpinning most (if not all) memory modules is the `ChatMessageHistory` class.
+This is a super lightweight wrapper which exposes convenience methods for saving Human messages, AI messages, and then fetching them all.
+
+You may want to use this class directly if you are managing memory outside of a chain.
+
+import GetStarted from "@snippets/modules/memory/chat_messages/get_started.mdx"
+
+<GetStarted/>
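For reference, a minimal sketch of using `ChatMessageHistory` directly, outside of any chain (the import path is assumed from the `langchain` package of this branch):

```python
from langchain.memory import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("what's up?")

# Messages come back as HumanMessage/AIMessage objects, ready to feed to a model.
print(history.messages)
```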
@@ -1,34 +1,62 @@
 ---
 sidebar_position: 3
 ---

 # Memory

-🚧 _Docs under construction_ 🚧
+Most LLM applications have a conversational interface. An essential component of a conversation is being able to refer to information introduced earlier in the conversation.
+At bare minimum, a conversational system should be able to access some window of past messages directly.
+A more complex system will need to have a world model that it is constantly updating, which allows it to do things like maintain information about entities and their relationships.

-:::info
-Head to [Integrations](/docs/integrations/memory/) for documentation on built-in memory integrations with 3rd-party tools.
-:::
+We call this ability to store information about past interactions "memory".
+LangChain provides a lot of utilities for adding memory to a system.
+These utilities can be used by themselves or incorporated seamlessly into a chain.

-By default, Chains and Agents are stateless,
-meaning that they treat each incoming query independently (like the underlying LLMs and chat models themselves).
-In some applications, like chatbots, it is essential
-to remember previous interactions, both in the short and long-term.
-The **Memory** class does exactly that.
+A memory system needs to support two basic actions: reading and writing.
+Recall that every chain defines some core execution logic that expects certain inputs.
+Some of these inputs come directly from the user, but some of these inputs can come from memory.
+A chain will interact with its memory system twice in a given run.
+1. AFTER receiving the initial user inputs but BEFORE executing the core logic, a chain will READ from its memory system and augment the user inputs.
+2. AFTER executing the core logic but BEFORE returning the answer, a chain will WRITE the inputs and outputs of the current run to memory, so that they can be referred to in future runs.

-LangChain provides memory components in two forms.
-First, LangChain provides helper utilities for managing and manipulating previous chat messages.
-These are designed to be modular and useful regardless of how they are used.
-Secondly, LangChain provides easy ways to incorporate these utilities into chains.
+![memory_diagram](/img/memory_diagram.png)

+## Building memory into a system
+The two core design decisions in any memory system are:
+- How state is stored
+- How state is queried
+
+### Storing: List of chat messages
+Underlying any memory is a history of all chat interactions.
+Even if these are not all used directly, they need to be stored in some form.
+One of the key parts of the LangChain memory module is a series of integrations for storing these chat messages,
+from in-memory lists to persistent databases.
+
+- [Chat message storage](/docs/modules/memory/chat_messages/): How to work with Chat Messages, and the various integrations offered
+
+### Querying: Data structures and algorithms on top of chat messages
+Keeping a list of chat messages is fairly straightforward.
+What is less straightforward are the data structures and algorithms built on top of chat messages that serve a view of those messages that is most useful.
+
+A very simple memory system might just return the most recent messages each run. A slightly more complex memory system might return a succinct summary of the past K messages.
+An even more sophisticated system might extract entities from stored messages and only return information about entities referenced in the current run.
+
+Each application can have different requirements for how memory is queried. The memory module should make it easy to both get started with simple memory systems and write your own custom systems if needed.
+
+- [Memory types](/docs/modules/memory/types/): The various data structures and algorithms that make up the memory types LangChain supports

 ## Get started

-Memory involves keeping a concept of state around throughout a user's interactions with a language model. A user's interactions with a language model are captured in the concept of ChatMessages, so this boils down to ingesting, capturing, transforming and extracting knowledge from a sequence of chat messages. There are many different ways to do this, each of which exists as its own memory type.
-
-In general, for each type of memory there are two ways to understand using memory: the standalone functions which extract information from a sequence of messages, and the way you can use this type of memory in a chain.
-
-Memory can return multiple pieces of information (for example, the most recent N messages and a summary of all previous messages). The returned information can either be a string or a list of messages.
+Let's take a look at what Memory actually looks like in LangChain.
+Here we'll cover the basics of interacting with an arbitrary memory class.

 import GetStarted from "@snippets/modules/memory/get_started.mdx"

 <GetStarted/>

 ## Next steps

 And that's it for getting started!
 Please see the other sections for walkthroughs of more advanced topics,
 like custom memory, multiple memories, and more.
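To ground the READ/WRITE cycle described above, here is a minimal sketch using `ConversationBufferMemory`, a simple buffer memory type from the same package:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

# WRITE: after a run, the chain saves that run's inputs and outputs.
memory.save_context({"input": "hi"}, {"output": "what's up"})

# READ: before the next run, the chain loads stored state to augment its inputs.
print(memory.load_memory_variables({}))
# {'history': "Human: hi\nAI: what's up"}
```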
@@ -4,6 +4,6 @@ This notebook shows how to use `ConversationBufferMemory`. This memory allows fo

 We can first extract it as a string.

-import Example from "@snippets/modules/memory/how_to/buffer.mdx"
+import Example from "@snippets/modules/memory/types/buffer.mdx"

 <Example/>
@@ -4,6 +4,6 @@

 Let's first explore the basic functionality of this type of memory.

-import Example from "@snippets/modules/memory/how_to/buffer_window.mdx"
+import Example from "@snippets/modules/memory/types/buffer_window.mdx"

 <Example/>
@@ -4,6 +4,6 @@ Entity Memory remembers given facts about specific entities in a conversation. I

 Let's first walk through using this functionality.

-import Example from "@snippets/modules/memory/how_to/entity_summary_memory.mdx"
+import Example from "@snippets/modules/memory/types/entity_summary_memory.mdx"

 <Example/>
docs/docs_skeleton/docs/modules/memory/types/index.mdx (new file, 8 lines)
@@ -0,0 +1,8 @@
+---
+sidebar_position: 2
+---
+# Memory Types
+
+There are many different types of memory.
+Each has its own parameters and return types, and each is useful in different scenarios.
+Please see their individual pages for more detail on each one.
@@ -4,6 +4,6 @@ Conversation summary memory summarizes the conversation as it happens and stores

 Let's first explore the basic functionality of this type of memory.

-import Example from "@snippets/modules/memory/how_to/summary.mdx"
+import Example from "@snippets/modules/memory/types/summary.mdx"

 <Example/>
@@ -6,6 +6,6 @@ This differs from most of the other Memory classes in that it doesn't explicitly

 In this case, the "docs" are previous conversation snippets. This can be useful to refer to relevant pieces of information that the AI was told earlier in the conversation.

-import Example from "@snippets/modules/memory/how_to/vectorstore_retriever_memory.mdx"
+import Example from "@snippets/modules/memory/types/vectorstore_retriever_memory.mdx"

 <Example/>
@@ -0,0 +1 @@
+label: 'How to'
@@ -2,7 +2,7 @@
 sidebar_position: 2
 ---

-# Conversational Retrieval QA
+# Store and reference chat history

 The ConversationalRetrievalQA chain builds on RetrievalQAChain to provide a chat history component.

 It first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question answering chain to return a response.
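A minimal sketch of that flow; note that `retriever` and the OpenAI API key are assumed to be configured already, and are placeholders for illustration:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

# `retriever` is assumed: e.g. vectorstore.as_retriever() from any vector store.
qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(), retriever)

chat_history = []
result = qa({"question": "What does the guide say about memory?", "chat_history": chat_history})
print(result["answer"])
```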
@@ -1,4 +1,4 @@
-# Dynamically selecting from multiple retrievers
+# Dynamically select from multiple retrievers

 This notebook demonstrates how to use the `RouterChain` paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the `MultiRetrievalQAChain` to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it.
@@ -1,4 +1,4 @@
-# Document QA
+# QA over in-memory documents

 Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](/docs/modules/chains/document/).
@@ -1,7 +1,7 @@
 ---
 sidebar_position: 1
 ---
-# Retrieval QA
+# QA using a Retriever

 This example showcases question answering over an index.
@@ -5,6 +5,7 @@ import logging
 import os
 import re
 from pathlib import Path
+import argparse

 logging.basicConfig(level=logging.INFO)
 logger = logging.getLogger(__name__)
@@ -14,7 +15,12 @@ _BASE_URL = "https://api.python.langchain.com/en/latest/"
 # Regular expression to match Python code blocks
 code_block_re = re.compile(r"^(```python\n)(.*?)(```\n)", re.DOTALL | re.MULTILINE)
 # Regular expression to match langchain import lines
-_IMPORT_RE = re.compile(r"(from\s+(langchain\.\w+(\.\w+)*?)\s+import\s+)(\w+)")
+_IMPORT_RE = re.compile(
+    r"from\s+(langchain\.\w+(\.\w+)*?)\s+import\s+"
+    r"((?:\w+(?:,\s*)?)*"  # Match zero or more words separated by a comma+optional ws
+    r"(?:\s*\(.*?\))?)",  # Match optional parentheses block
+    re.DOTALL,  # Match newlines as well
+)
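For intuition, a small runnable sketch of what the widened pattern captures; the import line is a hypothetical example:

```python
import re

_IMPORT_RE = re.compile(
    r"from\s+(langchain\.\w+(\.\w+)*?)\s+import\s+"
    r"((?:\w+(?:,\s*)?)*"
    r"(?:\s*\(.*?\))?)",
    re.DOTALL,
)

# Unlike the old pattern, this one captures multiple imported names at once.
match = _IMPORT_RE.search("from langchain.chains import LLMChain, SequentialChain")
print(match.group(1))  # langchain.chains
print(match.group(3))  # LLMChain, SequentialChain
```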
 _CURRENT_PATH = Path(__file__).parent.absolute()
 # Directory where generated markdown files are stored
@@ -24,6 +30,10 @@ _JSON_PATH = _CURRENT_PATH.parent / "api_reference" / "guide_imports.json"

 def find_files(path):
     """Find all MDX files in the given path"""
+    # Check if is file first
+    if os.path.isfile(path):
+        yield path
+        return
     for root, _, files in os.walk(path):
         for file in files:
             if file.endswith(".mdx") or file.endswith(".md"):
@@ -37,20 +47,33 @@ def get_full_module_name(module_path, class_name):
     return inspect.getmodule(class_).__name__


+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument(
+        "--docs_dir",
+        type=str,
+        default=_DOCS_DIR,
+        help="Directory where generated markdown files are stored",
+    )
+    return parser.parse_args()
+
+
 def main():
     """Main function"""
+    args = get_args()
     global_imports = {}

-    for file in find_files(_DOCS_DIR):
+    for file in find_files(args.docs_dir):
         print(f"Adding links for imports in {file}")

         # replace_imports now returns the import information rather than writing it to a file
         file_imports = replace_imports(file)

         if file_imports:
             # Use relative file path as key
-            relative_path = os.path.relpath(file, _DOCS_DIR)
-            doc_url = f"https://python.langchain.com/docs/{relative_path.replace('.mdx', '').replace('.md', '')}"
+            relative_path = (
+                os.path.relpath(file, _DOCS_DIR).replace(".mdx", "").replace(".md", "")
+            )
+
+            doc_url = f"https://python.langchain.com/docs/{relative_path}"
             for import_info in file_imports:
                 doc_title = import_info["title"]
                 class_name = import_info["imported"]
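A quick standalone sketch of the new `--docs_dir` flag; the directory values here are made up, and `parse_args` is fed a literal list so the snippet runs without real command-line arguments:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--docs_dir",
    type=str,
    default="docs",
    help="Directory where generated markdown files are stored",
)

# Equivalent to: python generate_api_reference_links.py --docs_dir docs/extras
args = parser.parse_args(["--docs_dir", "docs/extras"])
print(args.docs_dir)  # docs/extras
```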
@@ -59,6 +82,7 @@ def main():
                 global_imports[class_name][doc_title] = doc_url

     # Write the global imports information to a JSON file
+    _JSON_PATH.parent.mkdir(parents=True, exist_ok=True)
     with _JSON_PATH.open("w") as f:
         json.dump(global_imports, f)
@@ -76,7 +100,8 @@ def _get_doc_title(data: str, file_name: str) -> str:


 def replace_imports(file):
-    """Replace imports in each Python code block with links to their documentation and append the import info in a comment"""
+    """Replace imports in each Python code block with links to their
+    documentation and append the import info in a comment"""
     all_imports = []
     with open(file, "r") as f:
         data = f.read()
@@ -96,37 +121,45 @@
     # Process imports in the code block
     imports = []
     for import_match in _IMPORT_RE.finditer(code):
-        class_name = import_match.group(4)
-        try:
-            module_path = get_full_module_name(import_match.group(2), class_name)
-        except AttributeError as e:
-            logger.warning(f"Could not find module for {class_name}, {e}")
-            continue
-        except ImportError as e:
-            # Some CentOS OpenSSL issues can cause this to fail
-            logger.warning(f"Failed to load for class {class_name}, {e}")
-            continue
+        module = import_match.group(1)
+        imports_str = (
+            import_match.group(3).replace("(\n", "").replace("\n)", "")
+        )  # Handle newlines within parentheses
+        # remove any newline and spaces, then split by comma
+        imported_classes = [
+            imp.strip()
+            for imp in re.split(r",\s*", imports_str.replace("\n", ""))
+            if imp.strip()
+        ]
+        for class_name in imported_classes:
+            try:
+                module_path = get_full_module_name(module, class_name)
+            except AttributeError as e:
+                logger.warning(f"Could not find module for {class_name}, {e}")
+                continue
+            except ImportError as e:
+                logger.warning(f"Failed to load for class {class_name}, {e}")
+                continue

-        url = (
-            _BASE_URL
-            + "/"
-            + module_path.split(".")[1]
-            + "/"
-            + module_path
-            + "."
-            + class_name
-            + ".html"
-        )
+            url = (
+                _BASE_URL
+                + module_path.split(".")[1]
+                + "/"
+                + module_path
+                + "."
+                + class_name
+                + ".html"
+            )

-        # Add the import information to our list
-        imports.append(
-            {
-                "imported": class_name,
-                "source": import_match.group(2),
-                "docs": url,
-                "title": _DOC_TITLE,
-            }
-        )
+            # Add the import information to our list
+            imports.append(
+                {
+                    "imported": class_name,
+                    "source": module,
+                    "docs": url,
+                    "title": _DOC_TITLE,
+                }
+            )

     if imports:
         all_imports.extend(imports)
BIN docs/docs_skeleton/static/img/chat_use_case.png (new file, not shown; 93 KiB)
BIN docs/docs_skeleton/static/img/chat_use_case_2.png (new file, not shown; 102 KiB)
BIN docs/docs_skeleton/static/img/memory_diagram.png (new file, not shown; 111 KiB)
BIN docs/docs_skeleton/static/img/summarization_use_case_1.png (new file, not shown; 196 KiB)
BIN docs/docs_skeleton/static/img/summarization_use_case_2.png (new file, not shown; 90 KiB)
BIN docs/docs_skeleton/static/img/summarization_use_case_3.png (new file, not shown; 174 KiB)
@@ -1610,59 +1610,59 @@
   },
   {
     "source": "/en/latest/modules/chains/examples/flare.html",
-    "destination": "/docs/modules/chains/additional/flare"
+    "destination": "/docs/use_cases/question_answering/how_to/flare"
   },
   {
     "source": "/en/latest/modules/chains/examples/graph_cypher_qa.html",
-    "destination": "/docs/modules/chains/additional/graph_cypher_qa"
+    "destination": "/docs/use_cases/graph/graph_cypher_qa"
   },
   {
     "source": "/en/latest/modules/chains/examples/graph_nebula_qa.html",
-    "destination": "/docs/modules/chains/additional/graph_nebula_qa"
+    "destination": "/docs/use_cases/graph/graph_nebula_qa"
   },
   {
     "source": "/en/latest/modules/chains/index_examples/graph_qa.html",
-    "destination": "/docs/modules/chains/additional/graph_qa"
+    "destination": "/docs/use_cases/graph/graph_qa"
   },
   {
     "source": "/en/latest/modules/chains/index_examples/hyde.html",
-    "destination": "/docs/modules/chains/additional/hyde"
+    "destination": "/docs/use_cases/question_answering/how_to/hyde"
   },
   {
     "source": "/en/latest/modules/chains/examples/llm_bash.html",
-    "destination": "/docs/modules/chains/additional/llm_bash"
+    "destination": "/docs/use_cases/code_writing/llm_bash"
   },
   {
     "source": "/en/latest/modules/chains/examples/llm_checker.html",
-    "destination": "/docs/modules/chains/additional/llm_checker"
+    "destination": "/docs/use_cases/self_check/llm_checker"
   },
   {
     "source": "/en/latest/modules/chains/examples/llm_math.html",
-    "destination": "/docs/modules/chains/additional/llm_math"
+    "destination": "/docs/use_cases/code_writing/llm_math"
   },
   {
     "source": "/en/latest/modules/chains/examples/llm_requests.html",
-    "destination": "/docs/modules/chains/additional/llm_requests"
+    "destination": "/docs/use_cases/apis/llm_requests"
   },
   {
     "source": "/en/latest/modules/chains/examples/llm_summarization_checker.html",
-    "destination": "/docs/modules/chains/additional/llm_summarization_checker"
+    "destination": "/docs/use_cases/self_check/llm_summarization_checker"
   },
   {
     "source": "/en/latest/modules/chains/examples/openapi.html",
-    "destination": "/docs/modules/chains/additional/openapi"
+    "destination": "/docs/use_cases/apis/openapi"
   },
   {
     "source": "/en/latest/modules/chains/examples/pal.html",
-    "destination": "/docs/modules/chains/additional/pal"
+    "destination": "/docs/use_cases/code_writing/pal"
   },
   {
     "source": "/en/latest/modules/chains/examples/tagging.html",
-    "destination": "/docs/modules/chains/additional/tagging"
+    "destination": "/docs/use_cases/tagging"
   },
   {
     "source": "/en/latest/modules/chains/index_examples/vector_db_text_generation.html",
-    "destination": "/docs/modules/chains/additional/vector_db_text_generation"
+    "destination": "/docs/use_cases/question_answering/how_to/vector_db_text_generation"
   },
   {
     "source": "/en/latest/modules/chains/generic/router.html",
@@ -3448,6 +3448,10 @@
     "source": "/docs/modules/model_io/models/llms/integrations/writer",
     "destination": "/docs/integrations/llms/writer"
   },
+  {
+    "source": "/en/latest/modules/prompts.html",
+    "destination": "/docs/modules/model_io/prompts"
+  },
   {
     "source": "/en/latest/modules/prompts/output_parsers.html",
     "destination": "/docs/modules/model_io/output_parsers/"
@@ -3472,6 +3476,10 @@
     "source": "/en/latest/modules/prompts/output_parsers/examples/retry.html",
     "destination": "/docs/modules/model_io/output_parsers/retry"
   },
+  {
+    "source": "/en/latest/modules/prompts/example_selectors.html",
+    "destination": "/docs/modules/model_io/prompts/example_selectors"
+  },
   {
     "source": "/en/latest/modules/prompts/example_selectors/examples/custom_example_selector.html",
     "destination": "/docs/modules/model_io/prompts/example_selectors/custom_example_selector"
@@ -3484,6 +3492,10 @@
     "source": "/en/latest/modules/prompts/example_selectors/examples/ngram_overlap.html",
     "destination": "/docs/modules/model_io/prompts/example_selectors/ngram_overlap"
   },
+  {
+    "source": "/en/latest/modules/prompts/prompt_templates.html",
+    "destination": "/docs/modules/model_io/prompts/prompt_templates"
+  },
   {
     "source": "/en/latest/modules/prompts/prompt_templates/examples/connecting_to_a_feature_store.html",
     "destination": "/docs/modules/model_io/prompts/prompt_templates/connecting_to_a_feature_store"
@@ -3736,6 +3748,10 @@
     "source": "/docs/modules/evaluation/:path*(/?)",
     "destination": "/docs/guides/evaluation/:path*"
   },
+  {
+    "source": "/en/latest/modules/indexes.html",
+    "destination": "/docs/modules/data_connection"
+  },
   {
     "source": "/en/latest/modules/indexes/:path*",
     "destination": "/docs/modules/data_connection/:path*"
@@ -3771,6 +3787,174 @@
   {
     "source": "/en/latest/:path*",
     "destination": "/docs/:path*"
   },
+  {
+    "source": "/docs/modules/chains/additional/constitutional_chain",
+    "destination": "/docs/guides/safety/constitutional_chain"
+  },
+  {
+    "source": "/docs/modules/chains/additional/moderation",
+    "destination": "/docs/guides/safety/moderation"
+  },
+  {
+    "source": "/docs/modules/chains/popular/api",
+    "destination": "/docs/use_cases/apis/api"
+  },
+  {
+    "source": "/docs/modules/chains/additional/analyze_document",
+    "destination": "/docs/use_cases/question_answering/how_to/analyze_document"
+  },
+  {
+    "source": "/docs/modules/chains/popular/chat_vector_db",
+    "destination": "/docs/use_cases/question_answering/how_to/chat_vector_db"
+  },
+  {
+    "source": "/docs/modules/chains/additional/multi_retrieval_qa_router",
+    "destination": "/docs/use_cases/question_answering/how_to/multi_retrieval_qa_router"
+  },
+  {
+    "source": "/docs/modules/chains/additional/question_answering",
+    "destination": "/docs/use_cases/question_answering/how_to/question_answering"
+  },
+  {
+    "source": "/docs/modules/chains/popular/vector_db_qa",
+    "destination": "/docs/use_cases/question_answering/how_to/vector_db_qa"
+  },
+  {
+    "source": "/docs/modules/chains/popular/summarize",
+    "destination": "/docs/use_cases/summarization/summarize"
+  },
+  {
+    "source": "/docs/modules/chains/popular/sqlite",
+    "destination": "/docs/use_cases/tabular/sqlite"
+  },
+  {
+    "source": "/docs/modules/chains/popular/openai_functions",
+    "destination": "/docs/modules/chains/how_to/openai_functions"
+  },
+  {
+    "source": "/docs/modules/chains/additional/llm_requests",
+    "destination": "/docs/use_cases/apis/llm_requests"
+  },
+  {
+    "source": "/docs/modules/chains/additional/openai_openapi",
+    "destination": "/docs/use_cases/apis/openai_openapi"
+  },
+  {
+    "source": "/docs/modules/chains/additional/openapi",
+    "destination": "/docs/use_cases/apis/openapi"
+  },
+  {
+    "source": "/docs/modules/chains/additional/openapi_openai",
+    "destination": "/docs/use_cases/apis/openapi_openai"
+  },
+  {
+    "source": "/docs/modules/chains/additional/cpal",
+    "destination": "/docs/use_cases/code_writing/cpal"
+  },
+  {
+    "source": "/docs/modules/chains/additional/llm_bash",
+    "destination": "/docs/use_cases/code_writing/llm_bash"
+  },
+  {
+    "source": "/docs/modules/chains/additional/llm_math",
+    "destination": "/docs/use_cases/code_writing/llm_math"
+  },
+  {
+    "source": "/docs/modules/chains/additional/llm_symbolic_math",
+    "destination": "/docs/use_cases/code_writing/llm_symbolic_math"
+  },
+  {
+    "source": "/docs/modules/chains/additional/pal",
+    "destination": "/docs/use_cases/code_writing/pal"
+  },
+  {
+    "source": "/docs/modules/chains/additional/graph_arangodb_qa",
+    "destination": "/docs/use_cases/graph/graph_arangodb_qa"
+  },
+  {
+    "source": "/docs/modules/chains/additional/graph_cypher_qa",
+    "destination": "/docs/use_cases/graph/graph_cypher_qa"
+  },
+  {
+    "source": "/docs/modules/chains/additional/graph_hugegraph_qa",
+    "destination": "/docs/use_cases/graph/graph_hugegraph_qa"
+  },
+  {
+    "source": "/docs/modules/chains/additional/graph_kuzu_qa",
+    "destination": "/docs/use_cases/graph/graph_kuzu_qa"
+  },
+  {
+    "source": "/docs/modules/chains/additional/graph_nebula_qa",
+    "destination": "/docs/use_cases/graph/graph_nebula_qa"
+  },
+  {
+    "source": "/docs/modules/chains/additional/graph_qa",
+    "destination": "/docs/use_cases/graph/graph_qa"
+  },
+  {
+    "source": "/docs/modules/chains/additional/graph_sparql_qa",
+    "destination": "/docs/use_cases/graph/graph_sparql_qa"
+  },
+  {
+    "source": "/docs/modules/chains/additional/neptune_cypher_qa",
+    "destination": "/docs/use_cases/graph/neptune_cypher_qa"
+  },
+  {
+    "source": "/docs/modules/chains/additional/tot",
+    "destination": "/docs/use_cases/graph/tot"
+  },
+  {
+    "source": "/docs/use_cases/question_answering//document-context-aware-QA",
+    "destination": "/docs/use_cases/question_answering/how_to/document-context-aware-QA"
+  },
+  {
+    "source": "/docs/modules/chains/additional/flare",
+    "destination": "/docs/use_cases/question_answering/how_to/flare"
+  },
+  {
+    "source": "/docs/modules/chains/additional/hyde",
+    "destination": "/docs/use_cases/question_answering/how_to/hyde"
+  },
+  {
+    "source": "/docs/use_cases/question_answering//local_retrieval_qa",
+    "destination": "/docs/use_cases/question_answering/how_to/local_retrieval_qa"
+  },
+  {
+    "source": "/docs/modules/chains/additional/qa_citations",
+    "destination": "/docs/use_cases/question_answering/how_to/qa_citations"
+  },
+  {
+    "source": "/docs/modules/chains/additional/vector_db_text_generation",
+    "destination": "/docs/use_cases/question_answering/how_to/vector_db_text_generation"
+  },
+  {
+    "source": "/docs/modules/chains/additional/openai_functions_retrieval_qa",
+    "destination": "/docs/use_cases/question_answering/integrations/openai_functions_retrieval_qa"
+  },
+  {
+    "source": "/docs/use_cases/question_answering//semantic-search-over-chat",
+    "destination": "/docs/use_cases/question_answering/integrations/semantic-search-over-chat"
+  },
+  {
+    "source": "/docs/modules/chains/additional/llm_checker",
+    "destination": "/docs/use_cases/self_check/llm_checker"
+  },
+  {
+    "source": "/docs/modules/chains/additional/llm_summarization_checker",
+    "destination": "/docs/use_cases/self_check/llm_summarization_checker"
+  },
+  {
+    "source": "/docs/modules/chains/additional/elasticsearch_database",
+    "destination": "/docs/use_cases/tabular/elasticsearch_database"
+  },
+  {
+    "source": "/docs/modules/chains/additional/tagging",
+    "destination": "/docs/use_cases/tagging"
+  },
+  {
+    "source": "docs/integrations/providers/agent_with_wandb_tracing",
+    "destination": "docs/integrations/providers/wandb_tracing"
+  }
 ]
 }
@@ -1,26 +1,47 @@
 #!/bin/bash

 version_compare() {
     local v1=(${1//./ })
     local v2=(${2//./ })
     for i in {0..2}; do
         if (( ${v1[i]} < ${v2[i]} )); then
             return 1
         fi
     done
     return 0
 }

 openssl_version=$(openssl version | awk '{print $2}')
 required_openssl_version="1.1.1"

 python_version=$(python3 --version 2>&1 | awk '{print $2}')
 required_python_version="3.10"

 echo "OpenSSL Version"
 echo $openssl_version
 echo "Python Version"
 echo $python_version
 # If openssl version is less than 1.1.1 AND python version is less than 3.10
 if ! version_compare $openssl_version $required_openssl_version && ! version_compare $python_version $required_python_version; then
     ### See: https://github.com/urllib3/urllib3/issues/2168
     # Requests lib breaks for old SSL versions,
     # which are defaults on Amazon Linux 2 (which Vercel uses for builds)
     yum -y update
     yum remove openssl-devel -y
     yum install gcc bzip2-devel libffi-devel zlib-devel wget tar -y
     yum install openssl11 -y
     yum install openssl11-devel -y
     # Install python 3.11 to connect with openSSL 1.1.1
     wget https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tgz
     tar xzf Python-3.11.4.tgz
     cd Python-3.11.4
     ./configure
     make altinstall
     # Check python version
     echo "Python Version"
     python3.11 --version
     cd ..
 ###
-yum -y update
-yum remove openssl-devel -y
-yum install gcc bzip2-devel libffi-devel zlib-devel wget tar -y
-yum install openssl11 -y
-yum install openssl11-devel -y
-
-wget https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tgz
-tar xzf Python-3.11.4.tgz
-cd Python-3.11.4
-./configure
-make altinstall
-echo "Python Version"
-python3.11 --version
-cd ..
 fi

 # Install nbdev and generate docs
 cd ..
 python3.11 -m venv .venv
 source .venv/bin/activate
@@ -1,5 +1,6 @@
 # Tutorials

+Below are links to video tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases).

 ⛓ icon marks a new addition [last update 2023-07-05]
@@ -4,7 +4,7 @@ If you're building with LLMs, at some point something will break, and you'll nee

 Here are a few different tools and functionalities to aid in debugging.

 <!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->

 ## Tracing
docs/extras/guides/expression_language/cookbook.ipynb (new file, 1424 lines; file diff suppressed because it is too large)
docs/extras/guides/expression_language/interface.ipynb (new file, 282 lines)
@@ -0,0 +1,282 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "9a9acd2e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Interface\n",
|
||||
"\n",
|
||||
"In an effort to make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/latest/schema/langchain.schema.runnable.Runnable.html#langchain.schema.runnable.Runnable) protocol that most components implement. This is a standard interface with a few different methods, which makes it easy to define custom chains as well as making it possible to invoke them in a standard way. The standard interface exposed includes:\n",
|
||||
"\n",
|
||||
"- `stream`: stream back chunks of the response\n",
|
||||
"- `invoke`: call the chain on an input\n",
|
||||
"- `batch`: call the chain on a list of inputs\n",
|
||||
"\n",
|
||||
"These also have corresponding async methods:\n",
|
||||
"\n",
|
||||
"- `astream`: stream back chunks of the response async\n",
|
||||
"- `ainvoke`: call the chain on an input async\n",
|
||||
"- `abatch`: call the chain on a list of inputs async\n",
|
||||
"\n",
|
||||
"The type of the input varies by component:\n",
|
||||
"\n",
|
||||
"| Component | Input Type |\n",
|
||||
"| --- | --- |\n",
|
||||
"|Prompt|Dictionary|\n",
|
||||
"|Retriever|Single string|\n",
|
||||
"|Model| Single string, list of chat messages or a PromptValue|\n",
|
||||
"\n",
|
||||
"The output type also varies by component:\n",
|
||||
"\n",
|
||||
"| Component | Output Type |\n",
|
||||
"| --- | --- |\n",
|
||||
"| LLM | String |\n",
|
||||
"| ChatModel | ChatMessage |\n",
|
||||
"| Prompt | PromptValue |\n",
|
||||
"| Retriever | List of documents |\n",
|
||||
"\n",
|
||||
"Let's take a look at these methods! To do so, we'll create a super simple PromptTemplate + ChatModel chain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 20,
|
||||
"id": "466b65b3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import ChatPromptTemplate\n",
|
||||
"from langchain.chat_models import ChatOpenAI"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "3c634ef0",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"model = ChatOpenAI()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "d1850a1f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"prompt = ChatPromptTemplate.from_template(\"tell me a joke about {topic}\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "56d0669f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"chain = prompt | model"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "daf2b2b2",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Stream"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "bea9639d",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Sure, here's a bear-themed joke for you:\n",
|
||||
"\n",
|
||||
"Why don't bears wear shoes?\n",
|
||||
"\n",
|
||||
"Because they have bear feet!"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"for s in chain.stream({\"topic\": \"bears\"}):\n",
|
||||
" print(s.content, end=\"\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "cbf1c782",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Invoke"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "470e483f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they already have bear feet!\", additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.invoke({\"topic\": \"bears\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "88f0c279",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Batch"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"id": "9685de67",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[AIMessage(content=\"Why don't bears ever wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False),\n",
|
||||
" AIMessage(content=\"Why don't cats play poker in the wild?\\n\\nToo many cheetahs!\", additional_kwargs={}, example=False)]"
|
||||
]
|
||||
},
|
||||
"execution_count": 19,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"cats\"}])"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b960cbfe",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Async Stream"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "ea35eee4",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Why don't bears wear shoes?\n",
|
||||
"\n",
|
||||
"Because they have bear feet!"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"async for s in chain.astream({\"topic\": \"bears\"}):\n",
|
||||
" print(s.content, end=\"\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "04cb3324",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Async Invoke"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "ef8c9b20",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"AIMessage(content=\"Sure, here you go:\\n\\nWhy don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False)"
|
||||
]
|
||||
},
|
||||
"execution_count": 16,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"await chain.ainvoke({\"topic\": \"bears\"})"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "3da288d5",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Async Batch"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"id": "eba2a103",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[AIMessage(content=\"Why don't bears wear shoes?\\n\\nBecause they have bear feet!\", additional_kwargs={}, example=False)]"
|
||||
]
|
||||
},
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"await chain.abatch([{\"topic\": \"bears\"}])"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
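The `chain` exercised in the cells above is not defined anywhere in this excerpt. A minimal sketch of how such a runnable is typically composed with the LCEL pipe syntax (the prompt text and model choice here are assumptions, not taken from the diff):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

# prompt | model yields a runnable that supports invoke/batch/stream
# and the async variants (ainvoke/abatch/astream) shown above.
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
model = ChatOpenAI()
chain = prompt | model
```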
@@ -5,7 +5,7 @@
"id": "920a3c1a",
"metadata": {},
"source": [
"# Model Comparison\n",
"# Model comparison\n",
"\n",
"Constructing your language model application will likely involve choosing between many different options of prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way. \n",
"\n",
@@ -254,7 +254,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
"version": "3.11.3"
}
},
"nbformat": 4,

@@ -14,7 +14,7 @@
"> using both human and machine feedback. We provide support for each step in the MLOps cycle, \n",
"> from data labeling to model monitoring.\n",
"\n",
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/modules/callbacks/integrations/argilla.html\">\n",
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/integrations/callbacks/argilla.html\">\n",
"  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
"</a>"
]
287 docs/extras/integrations/chat/anthropic_functions.ipynb Normal file
@@ -0,0 +1,287 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5125a1e3",
"metadata": {},
"source": [
"# Anthropic Functions\n",
"\n",
"This notebook shows how to use an experimental wrapper around Anthropic that gives it the same API as OpenAI Functions."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "378be79b",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/harrisonchase/.pyenv/versions/3.9.1/envs/langchain/lib/python3.9/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.6.14) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.\n",
"  warnings.warn(\n"
]
}
],
"source": [
"from langchain_experimental.llms.anthropic_functions import AnthropicFunctions"
]
},
{
"cell_type": "markdown",
"id": "65499965",
"metadata": {},
"source": [
"## Initialize Model\n",
"\n",
"You can initialize this wrapper the same way you'd initialize ChatAnthropic."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e1d535f6",
"metadata": {},
"outputs": [],
"source": [
"model = AnthropicFunctions(model='claude-2')"
]
},
{
"cell_type": "markdown",
"id": "fcc9eaf4",
"metadata": {},
"source": [
"## Passing in functions\n",
"\n",
"You can now pass in functions in a similar way."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0779c320",
"metadata": {},
"outputs": [],
"source": [
"functions=[\n",
"    {\n",
"      \"name\": \"get_current_weather\",\n",
"      \"description\": \"Get the current weather in a given location\",\n",
"      \"parameters\": {\n",
"        \"type\": \"object\",\n",
"        \"properties\": {\n",
"          \"location\": {\n",
"            \"type\": \"string\",\n",
"            \"description\": \"The city and state, e.g. San Francisco, CA\"\n",
"          },\n",
"          \"unit\": {\n",
"            \"type\": \"string\",\n",
"            \"enum\": [\"celsius\", \"fahrenheit\"]\n",
"          }\n",
"        },\n",
"        \"required\": [\"location\"]\n",
"      }\n",
"    }\n",
"  ]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ad75a933",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "fc703085",
"metadata": {},
"outputs": [],
"source": [
"response = model.predict_messages(\n",
"    [HumanMessage(content=\"what's the weather in boston?\")], \n",
"    functions=functions\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "04d7936a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' ', additional_kwargs={'function_call': {'name': 'get_current_weather', 'arguments': '{\"location\": \"Boston, MA\", \"unit\": \"fahrenheit\"}'}}, example=False)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response"
]
},
{
"cell_type": "markdown",
"id": "0072fdba",
"metadata": {},
"source": [
"## Using for extraction\n",
"\n",
"You can now use this for extraction."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7af5c567",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import create_extraction_chain\n",
"schema = {\n",
"    \"properties\": {\n",
"        \"name\": {\"type\": \"string\"},\n",
"        \"height\": {\"type\": \"integer\"},\n",
"        \"hair_color\": {\"type\": \"string\"},\n",
"    },\n",
"    \"required\": [\"name\", \"height\"],\n",
"}\n",
"inp = \"\"\"\n",
"Alex is 5 feet tall. Claudia is 1 foot taller than Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.\n",
"        \"\"\""
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "bd01082a",
"metadata": {},
"outputs": [],
"source": [
"chain = create_extraction_chain(schema, model)"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b5a23e9f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'Alex', 'height': '5', 'hair_color': 'blonde'},\n",
" {'name': 'Claudia', 'height': '6', 'hair_color': 'brunette'}]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(inp)"
]
},
{
"cell_type": "markdown",
"id": "90ec959e",
"metadata": {},
"source": [
"## Using for tagging\n",
"\n",
"You can now use this for tagging."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "03c1eb0d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import create_tagging_chain"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "581c0ece",
"metadata": {},
"outputs": [],
"source": [
"schema = {\n",
"    \"properties\": {\n",
"        \"sentiment\": {\"type\": \"string\"},\n",
"        \"aggressiveness\": {\"type\": \"integer\"},\n",
"        \"language\": {\"type\": \"string\"},\n",
"    }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "d9a8570e",
"metadata": {},
"outputs": [],
"source": [
"chain = create_tagging_chain(schema, model)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "cf37d679",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'sentiment': 'positive', 'aggressiveness': '0', 'language': 'english'}"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.run(\"this is really cool\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
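A natural follow-up to the weather example above, since the structured arguments come back as a JSON string inside `additional_kwargs`:

```python
import json

# Pull the function call out of the AIMessage returned by predict_messages.
function_call = response.additional_kwargs["function_call"]
args = json.loads(function_call["arguments"])
print(function_call["name"], args["location"])  # get_current_weather Boston, MA
```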
95 docs/extras/integrations/chat/azureml_chat_endpoint.ipynb Normal file
@@ -0,0 +1,95 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AzureML Chat Online Endpoint\n",
"\n",
"[AzureML](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.\n",
"\n",
"This notebook goes over how to use a chat model hosted on an `AzureML online endpoint`."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models.azureml_endpoint import AzureMLChatOnlineEndpoint"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up\n",
"\n",
"To use the wrapper, you must [deploy a model on AzureML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) and obtain the following parameters:\n",
"\n",
"* `endpoint_api_key`: The API key provided by the endpoint\n",
"* `endpoint_url`: The REST endpoint URL provided by the endpoint"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Content Formatter\n",
"\n",
"The `content_formatter` parameter is a handler class for transforming the request and response of an AzureML endpoint to match the required schema. Since there is a wide range of models in the model catalog, each of which may process data differently, a `ContentFormatterBase` class is provided to allow users to transform data to their liking. The following content formatters are provided:\n",
"\n",
"* `LlamaContentFormatter`: Formats request and response data for LLaMa2-chat"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' The Collatz Conjecture is one of the most famous unsolved problems in mathematics, and it has been the subject of much study and research for many years. While it is impossible to predict with certainty whether the conjecture will ever be solved, there are several reasons why it is considered a challenging and important problem:\\n\\n1. Simple yet elusive: The Collatz Conjecture is a deceptively simple statement that has proven to be extraordinarily difficult to prove or disprove. Despite its simplicity, the conjecture has eluded some of the brightest minds in mathematics, and it remains one of the most famous open problems in the field.\\n2. Wide-ranging implications: The Collatz Conjecture has far-reaching implications for many areas of mathematics, including number theory, algebra, and analysis. A solution to the conjecture could have significant impacts on these fields and potentially lead to new insights and discoveries.\\n3. Computational evidence: While the conjecture remains unproven, extensive computational evidence supports its validity. In fact, no counterexample to the conjecture has been found for any starting value up to 2^64 (a number', additional_kwargs={}, example=False)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models.azureml_endpoint import LlamaContentFormatter\n",
"from langchain.schema import HumanMessage\n",
"\n",
"chat = AzureMLChatOnlineEndpoint(content_formatter=LlamaContentFormatter())\n",
"response = chat(messages=[\n",
"    HumanMessage(content=\"Will the Collatz conjecture ever be solved?\")\n",
"])\n",
"response"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.11"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
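The notebook above relies on environment-provided credentials; a sketch of wiring in the two parameters listed in its Set up section explicitly (the URL shape is a placeholder and the keyword names are assumptions based on the parameter list, so verify them against your installed version):

```python
from langchain.chat_models.azureml_endpoint import (
    AzureMLChatOnlineEndpoint,
    LlamaContentFormatter,
)

# Assumed constructor arguments mirroring the documented parameters.
chat = AzureMLChatOnlineEndpoint(
    endpoint_url="https://<your-endpoint>.<region>.inference.ml.azure.com/score",
    endpoint_api_key="your-api-key",
    content_formatter=LlamaContentFormatter(),
)
```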
@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -8,11 +9,7 @@
"\n",
"Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
"\n",
"PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the [GCP Service Specific Terms](https://cloud.google.com/terms/service-terms). \n",
"\n",
"Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the [launch stage descriptions](https://cloud.google.com/products#product-launch-stages). Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview [terms and conditions](https://cloud.google.com/trustedtester/aitos) (Preview Terms).\n",
"\n",
"For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms).\n",
"By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) Customer Data to train its foundation models as part of Google Cloud's AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google's Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).\n",
"\n",
"To use Vertex AI PaLM you must have the `google-cloud-aiplatform` Python package installed and either:\n",
"- Have credentials configured for your environment (gcloud, workload identity, etc...)\n",
@@ -90,6 +87,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -142,6 +140,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {
"execution": {
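The hunk above trims the Preview-terms boilerplate but keeps the setup requirements; a minimal sketch of what using the integration looks like once `google-cloud-aiplatform` is installed and gcloud credentials are configured (the model name is an assumption for illustration):

```python
from langchain.llms import VertexAI

# Assumes application-default credentials are already configured.
llm = VertexAI(model_name="text-bison")
print(llm("What are some tips for improving code readability?"))
```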
94 docs/extras/integrations/document_loaders/concurrent.ipynb Normal file
@@ -0,0 +1,94 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "23c6e167",
"metadata": {},
"source": [
"# Concurrent Loader\n",
"\n",
"Works just like the `GenericLoader`, but loads files concurrently for those who want to optimize their workflow.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6ff3fb1f",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import ConcurrentLoader"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "ce96fa20",
"metadata": {},
"outputs": [],
"source": [
"loader = ConcurrentLoader.from_filesystem('example_data/', glob=\"**/*.txt\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "06a6cf5d",
"metadata": {},
"outputs": [],
"source": [
"files = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "b87d3e58",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"2"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(files)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "668f1ee5",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
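The notebook never shows how to control the degree of concurrency; a sketch, assuming the `num_workers` parameter this loader's `from_filesystem` constructor is believed to expose (verify against your installed version):

```python
from langchain.document_loaders import ConcurrentLoader

# num_workers (assumed parameter) bounds the worker pool used for loading.
loader = ConcurrentLoader.from_filesystem(
    "example_data/", glob="**/*.txt", num_workers=4
)
files = loader.load()
```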
@@ -2,7 +2,7 @@

This notebook covers how to load data from an .ipynb notebook into a format suitable for LangChain.

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. -->

```python
@@ -0,0 +1,13 @@
<?xml version="1.0" encoding="UTF-8"?>

<opml version="1.0">
  <head>
    <title>Sample RSS feed subscriptions</title>
  </head>
  <body>
    <outline text="Tech" title="Tech">
      <outline type="rss" text="Engadget" title="Engadget" xmlUrl="http://www.engadget.com/rss-full.xml" htmlUrl="http://www.engadget.com"/>
      <outline type="rss" text="Ars Technica - All content" title="Ars Technica - All content" xmlUrl="http://feeds.arstechnica.com/arstechnica/index/" htmlUrl="https://arstechnica.com"/>
    </outline>
  </body>
</opml>
@@ -0,0 +1,178 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c83b6a4c",
"metadata": {},
"source": [
"# Huawei OBS Directory\n",
"The following code demonstrates how to load objects from the Huawei OBS (Object Storage Service) as documents."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c2191935",
"metadata": {},
"outputs": [],
"source": [
"# Install the required package\n",
"# pip install esdk-obs-python"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "55fca3b4",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import OBSDirectoryLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c3ed419f",
"metadata": {},
"outputs": [],
"source": [
"endpoint = \"your-endpoint\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "3428fd4e",
"metadata": {},
"outputs": [],
"source": [
"# Configure your access credentials\n",
"config = {\n",
"    \"ak\": \"your-access-key\",\n",
"    \"sk\": \"your-secret-key\"\n",
"}\n",
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9beede9f",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "1e20a839",
"metadata": {},
"source": [
"## Specify a Prefix for Loading\n",
"If you want to load objects with a specific prefix from the bucket, you can use the following code:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "125f311d",
"metadata": {},
"outputs": [],
"source": [
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint, config=config, prefix=\"test_prefix\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b3488037",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "84c82c0a",
"metadata": {},
"source": [
"## Get Authentication Information from ECS\n",
"If your LangChain application is deployed on Huawei Cloud ECS and [Agency is set up](https://support.huaweicloud.com/intl/en-us/usermanual-ecs/ecs_03_0166.html#section7), the loader can get the security token directly from ECS without needing an access key and secret key. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "1db99969",
"metadata": {},
"outputs": [],
"source": [
"config = {\"get_token_from_ecs\": True}\n",
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "57dd9f35",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "30205d25",
"metadata": {},
"source": [
"## Use a Public Bucket\n",
"If your bucket policy allows anonymous access (anonymous users have `listBucket` and `GetObject` permissions), you can load the objects directly without configuring the `config` parameter."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4dfa2ef0",
"metadata": {},
"outputs": [],
"source": [
"loader = OBSDirectoryLoader(\"your-bucket-name\", endpoint=endpoint)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "67d4c1d0",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
180 docs/extras/integrations/document_loaders/huawei_obs_file.ipynb Normal file
@@ -0,0 +1,180 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4394a872",
"metadata": {},
"source": [
"# Huawei OBS File\n",
"The following code demonstrates how to load an object from the Huawei OBS (Object Storage Service) as a document."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c43d811b",
"metadata": {},
"outputs": [],
"source": [
"# Install the required package\n",
"# pip install esdk-obs-python"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5e16bae6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders.obs_file import OBSFileLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "75cc7e7c",
"metadata": {},
"outputs": [],
"source": [
"endpoint = \"your-endpoint\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f9816984",
"metadata": {},
"outputs": [],
"source": [
"from obs import ObsClient\n",
"obs_client = ObsClient(access_key_id=\"your-access-key\", secret_access_key=\"your-secret-key\", server=endpoint)\n",
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\", client=obs_client)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6143b39b",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "633e05ca",
"metadata": {},
"source": [
"## Each Loader with Separate Authentication Information\n",
"If you don't need to reuse OBS connections between different loaders, you can configure the `config` directly. The loader will use the config information to initialize its own OBS client."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a5dd6a5d",
"metadata": {},
"outputs": [],
"source": [
"# Configure your access credentials\n",
"config = {\n",
"    \"ak\": \"your-access-key\",\n",
"    \"sk\": \"your-secret-key\"\n",
"}\n",
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\", endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a741f1c",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "1e2e611c",
"metadata": {},
"source": [
"## Get Authentication Information from ECS\n",
"If your LangChain application is deployed on Huawei Cloud ECS and [Agency is set up](https://support.huaweicloud.com/intl/en-us/usermanual-ecs/ecs_03_0166.html#section7), the loader can get the security token directly from ECS without needing an access key and secret key. "
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "338fafef",
"metadata": {},
"outputs": [],
"source": [
"config = {\"get_token_from_ecs\": True}\n",
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\", endpoint=endpoint, config=config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "73976c55",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "markdown",
"id": "b77aa18c",
"metadata": {},
"source": [
"## Access a Publicly Accessible Object\n",
"If the object you want to access allows anonymous user access (anonymous users have `GetObject` permission), you can load the object directly without configuring the `config` parameter."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "df83d121",
"metadata": {},
"outputs": [],
"source": [
"loader = OBSFileLoader(\"your-bucket-name\", \"your-object-key\", endpoint=endpoint)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82a844ba",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
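The "separate authentication" note above implies the converse pattern: one `ObsClient` shared across several loaders. A short sketch, with hypothetical object keys:

```python
from obs import ObsClient
from langchain.document_loaders.obs_file import OBSFileLoader

# One client, many loaders: avoids re-authenticating per object.
client = ObsClient(
    access_key_id="your-access-key",
    secret_access_key="your-secret-key",
    server="your-endpoint",
)
keys = ["doc-1.txt", "doc-2.txt"]  # hypothetical object keys
docs = []
for key in keys:
    docs.extend(OBSFileLoader("your-bucket-name", key, client=client).load())
```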
@@ -57,7 +57,13 @@
}
],
"source": [
"loader = MWDumpLoader(\"example_data/testmw_pages_current.xml\", encoding=\"utf8\")\n",
"loader = MWDumpLoader(\n",
"    file_path = \"example_data/testmw_pages_current.xml\", \n",
"    encoding=\"utf8\",\n",
"    # namespaces = [0,2,3] Optional list to load only specific namespaces. Loads all namespaces by default.\n",
"    skip_redirects = True,  # if True, skip pages that merely redirect to other pages\n",
"    stop_on_error = False  # if False, skip pages that cause parsing errors instead of raising\n",
"    )\n",
"documents = loader.load()\n",
"print(f\"You have {len(documents)} document(s) in your data \")"
]
192 docs/extras/integrations/document_loaders/news.ipynb Normal file
@@ -0,0 +1,192 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2dfc4698",
"metadata": {},
"source": [
"# News URL\n",
"\n",
"This covers how to load HTML news articles from a list of URLs into a document format that we can use downstream."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "16c3699e",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-02T21:18:18.886031400Z",
"start_time": "2023-08-02T21:18:17.682345Z"
}
},
"outputs": [],
"source": [
"from langchain.document_loaders import NewsURLLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "836fbac1",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-02T21:18:18.895539800Z",
"start_time": "2023-08-02T21:18:18.895539800Z"
}
},
"outputs": [],
"source": [
"urls = [\n",
"    \"https://www.bbc.com/news/world-us-canada-66388172\",\n",
"    \"https://www.bbc.com/news/entertainment-arts-66384971\",\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "33089aba-ff74-4d00-8f40-9449c29587cc",
"metadata": {},
"source": [
"Pass in URLs to load them into Documents"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "00f46fda",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-02T21:18:19.227074500Z",
"start_time": "2023-08-02T21:18:18.895539800Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"First article:  page_content='In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that \"no reasonable person\" would view her claims as fact. Neither she nor her representatives have commented.' metadata={'title': 'Donald Trump indictment: What do we know about the six co-conspirators?', 'link': 'https://www.bbc.com/news/world-us-canada-66388172', 'authors': [], 'language': 'en', 'description': 'Six people accused of helping Mr Trump undermine the election have been described by prosecutors.', 'publish_date': None}\n",
"\n",
"Second article:  page_content='Ms Williams added: \"If there\\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\\'t have to go through that same experience, I\\'m going to do that.\"' metadata={'title': \"Lizzo dancers Arianna Davis and Crystal Williams: 'No one speaks out, they are scared'\", 'link': 'https://www.bbc.com/news/entertainment-arts-66384971', 'authors': [], 'language': 'en', 'description': 'The US pop star is being sued for sexual harassment and fat-shaming but has yet to comment.', 'publish_date': None}\n"
]
}
],
"source": [
"loader = NewsURLLoader(urls=urls)\n",
"data = loader.load()\n",
"print(\"First article: \", data[0])\n",
"print(\"\\nSecond article: \", data[1])"
]
},
{
"cell_type": "markdown",
"source": [
"Use `nlp=True` to run NLP analysis and generate keywords and a summary"
],
"metadata": {
"collapsed": false
},
"id": "98ac26c488315bff"
},
{
"cell_type": "code",
"execution_count": 4,
"id": "b68a26b3",
"metadata": {
"ExecuteTime": {
"end_time": "2023-08-02T21:18:19.585758200Z",
"start_time": "2023-08-02T21:18:19.227074500Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"First article:  page_content='In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that \"no reasonable person\" would view her claims as fact. Neither she nor her representatives have commented.' metadata={'title': 'Donald Trump indictment: What do we know about the six co-conspirators?', 'link': 'https://www.bbc.com/news/world-us-canada-66388172', 'authors': [], 'language': 'en', 'description': 'Six people accused of helping Mr Trump undermine the election have been described by prosecutors.', 'publish_date': None, 'keywords': ['powell', 'know', 'donald', 'trump', 'review', 'indictment', 'telling', 'view', 'reasonable', 'person', 'testimony', 'coconspirators', 'riot', 'representatives', 'claims'], 'summary': 'In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that \"no reasonable person\" would view her claims as fact.\\nNeither she nor her representatives have commented.'}\n",
"\n",
"Second article:  page_content='Ms Williams added: \"If there\\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\\'t have to go through that same experience, I\\'m going to do that.\"' metadata={'title': \"Lizzo dancers Arianna Davis and Crystal Williams: 'No one speaks out, they are scared'\", 'link': 'https://www.bbc.com/news/entertainment-arts-66384971', 'authors': [], 'language': 'en', 'description': 'The US pop star is being sued for sexual harassment and fat-shaming but has yet to comment.', 'publish_date': None, 'keywords': ['davis', 'lizzo', 'singers', 'experience', 'crystal', 'ensure', 'arianna', 'theres', 'williams', 'power', 'going', 'dancers', 'im', 'speaks', 'work', 'ms', 'scared'], 'summary': 'Ms Williams added: \"If there\\'s anything that I can do in my power to ensure that dancers or singers or whoever decides to work with her don\\'t have to go through that same experience, I\\'m going to do that.\"'}\n"
]
}
],
"source": [
"loader = NewsURLLoader(urls=urls, nlp=True)\n",
"data = loader.load()\n",
"print(\"First article: \", data[0])\n",
"print(\"\\nSecond article: \", data[1])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"outputs": [
{
"data": {
"text/plain": "['powell',\n 'know',\n 'donald',\n 'trump',\n 'review',\n 'indictment',\n 'telling',\n 'view',\n 'reasonable',\n 'person',\n 'testimony',\n 'coconspirators',\n 'riot',\n 'representatives',\n 'claims']"
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0].metadata['keywords']"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-08-02T21:18:19.585758200Z",
"start_time": "2023-08-02T21:18:19.585758200Z"
}
},
"id": "ae37e004e0284b1d"
},
{
"cell_type": "code",
"execution_count": 6,
"outputs": [
{
"data": {
"text/plain": "'In testimony to the congressional committee examining the 6 January riot, Mrs Powell said she did not review all of the many claims of election fraud she made, telling them that \"no reasonable person\" would view her claims as fact.\\nNeither she nor her representatives have commented.'"
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0].metadata['summary']"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2023-08-02T21:18:19.598966800Z",
"start_time": "2023-08-02T21:18:19.594950200Z"
}
},
"id": "7676155fb175e53e"
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
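The intro promises "a document format that we can use downstream"; a sketch of that downstream step, chunking the loaded articles for embedding or retrieval (the splitter settings are illustrative):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split the loaded news Documents into overlapping chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(data)
```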
311 docs/extras/integrations/document_loaders/rss.ipynb Normal file
@@ -0,0 +1,311 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2dfc4698",
"metadata": {},
"source": [
"# RSS Feeds\n",
"\n",
"This covers how to load HTML news articles from a list of RSS feed URLs into a document format that we can use downstream."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7c2cd52-c1f7-4a06-8539-b0117da91fba",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!pip install feedparser newspaper3k listparser"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "16c3699e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import RSSFeedLoader"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "836fbac1",
"metadata": {},
"outputs": [],
"source": [
"urls = [\"https://news.ycombinator.com/rss\"]"
]
},
{
"cell_type": "markdown",
"id": "33089aba-ff74-4d00-8f40-9449c29587cc",
"metadata": {},
"source": [
"Pass in URLs to load them into Documents"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "00f46fda",
"metadata": {},
"outputs": [],
"source": [
"loader = RSSFeedLoader(urls=urls)\n",
"data = loader.load()\n",
"print(len(data))"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "b447468cc42266d0",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(next Rich)\n",
"\n",
"04 August 2023\n",
"\n",
"Rich Hickey\n",
"\n",
"It is with a mixture of heartache and optimism that I announce today my (long planned) retirement from commercial software development, and my employment at Nubank. It’s been thrilling to see Clojure and Datomic successfully applied at scale.\n",
"\n",
"I look forward to continuing to lead ongoing work maintaining and enhancing Clojure with Alex, Stu, Fogus and many others, as an independent developer once again. We have many useful things planned for 1.12 and beyond. The community remains friendly, mature and productive, and is taking Clojure into many interesting new domains.\n",
"\n",
"I want to highlight and thank Nubank for their ongoing sponsorship of Alex, Fogus and the core team, as well as the Clojure community at large.\n",
"\n",
"Stu will continue to lead the development of Datomic at Nubank, where the Datomic team grows and thrives. I’m particularly excited to see where the new free availability of Datomic will lead.\n",
"\n",
"My time with Cognitect remains the highlight of my career. I have learned from absolutely everyone on our team, and am forever grateful to all for our interactions. There are too many people to thank here, but I must extend my sincerest appreciation and love to Stu and Justin for (repeatedly) taking a risk on me and my ideas, and for being the best of partners and friends, at all times fully embodying the notion of integrity. And of course to Alex Miller - who possesses in abundance many skills I lack, and without whose indomitable spirit, positivity and friendship Clojure would not have become what it did.\n",
"\n",
"I have made many friends through Clojure and Cognitect, and I hope to nurture those friendships moving forward.\n",
"\n",
"Retirement returns me to the freedom and independence I had when originally developing Clojure. The journey continues!\n"
]
}
],
"source": [
"print(data[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "c36d3b0d329faf2a",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"You can pass arguments to the underlying NewsURLLoader, which it uses to load articles."
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "5fdada62470d3019",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error fetching or processing https://twitter.com/andrewmccalip/status/1687405505604734978, exception: You must `parse()` an article first!\n",
"Error processing entry https://twitter.com/andrewmccalip/status/1687405505604734978, exception: list index out of range\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"13\n"
]
}
],
"source": [
"loader = RSSFeedLoader(urls=urls, nlp=True)\n",
"data = loader.load()\n",
"print(len(data))"
]
},
{
"cell_type": "code",
"execution_count": 37,
"id": "11d71963f7735c1d",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"['nubank',\n",
" 'alex',\n",
" 'stu',\n",
" 'taking',\n",
" 'team',\n",
" 'remains',\n",
" 'rich',\n",
" 'clojure',\n",
" 'thank',\n",
" 'planned',\n",
" 'datomic']"
]
},
"execution_count": 37,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0].metadata['keywords']"
]
},
{
"cell_type": "code",
"execution_count": 38,
"id": "9fb64ba0e8780966",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"'It’s been thrilling to see Clojure and Datomic successfully applied at scale.\\nI look forward to continuing to lead ongoing work maintaining and enhancing Clojure with Alex, Stu, Fogus and many others, as an independent developer once again.\\nThe community remains friendly, mature and productive, and is taking Clojure into many interesting new domains.\\nI want to highlight and thank Nubank for their ongoing sponsorship of Alex, Fogus and the core team, as well as the Clojure community at large.\\nStu will continue to lead the development of Datomic at Nubank, where the Datomic team grows and thrives.'"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0].metadata['summary']"
]
},
{
"cell_type": "markdown",
"id": "98ac26c488315bff",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"You can also use an OPML file such as a Feedly export. Pass in either a URL or the OPML contents."
]
},
{
"cell_type": "code",
"execution_count": 39,
"id": "8b6f07ae526a897c",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error fetching http://www.engadget.com/rss-full.xml, exception: Error fetching http://www.engadget.com/rss-full.xml, exception: document declared as us-ascii, but parsed as utf-8\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"20\n"
]
}
],
"source": [
"with open(\"example_data/sample_rss_feeds.opml\", \"r\") as f:\n",
"    loader = RSSFeedLoader(opml=f.read())\n",
"data = loader.load()\n",
"print(len(data))"
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "b68a26b3",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"'The electric vehicle startup Fisker made a splash in Huntington Beach last night, showing off a range of new EVs it plans to build alongside the Fisker Ocean, which is slowly beginning deliveries in Europe and the US. With shades of Lotus circa 2010, it seems there\\'s something for most tastes, with a powerful four-door GT, a versatile pickup truck, and an affordable electric city car.\\n\\n\"We want the world to know that we have big plans and intend to move into several different segments, redefining each with our unique blend of design, innovation, and sustainability,\" said CEO Henrik Fisker.\\n\\nStarting with the cheapest, the Fisker PEAR—a cutesy acronym for \"Personal Electric Automotive Revolution\"—is said to use 35 percent fewer parts than other small EVs. Although it\\'s a smaller car, the PEAR seats six thanks to front and rear bench seats. Oh, and it has a frunk, which the company is calling the \"froot,\" something that will satisfy some British English speakers like Ars\\' friend and motoring journalist Jonny Smith.\\n\\nBut most exciting is the price—starting at $29,900 and scheduled for 2025. Fisker plans to contract with Foxconn to build the PEAR in Lordstown, Ohio, meaning it would be eligible for federal tax incentives.\\n\\nAdvertisement\\n\\nThe Fisker Alaska is the company\\'s pickup truck, built on a modified version of the platform used by the Ocean. It has an extendable cargo bed, which can be as little as 4.5 feet (1,371 mm) or as much as 9.2 feet (2,804 mm) long. Fisker claims it will be both the lightest EV pickup on sale and the most sustainable pickup truck in the world. Range will be an estimated 230–240 miles (370–386 km).\\n\\nThis, too, is slated for 2025, and also at a relatively affordable price, starting at $45,400. Fisker hopes to build this car in North America as well, although it isn\\'t saying where that might take place.\\n\\nFinally, there\\'s the Ronin, a four-door GT that bears more than a passing resemblance to the Fisker Karma, Henrik Fisker\\'s 2012 creation. There\\'s no price for this one, but Fisker says its all-wheel drive powertrain will boast 1,000 hp (745 kW) and will hit 60 mph from a standing start in two seconds—just about as fast as modern tires will allow. Expect a massive battery in this one, as Fisker says it\\'s targeting a 600-mile (956 km) range.\\n\\n\"Innovation and sustainability, along with design, are our three brand values. By 2027, we intend to produce the world’s first climate-neutral vehicle, and as our customers reinvent their relationships with mobility, we want to be a leader in software-defined transportation,\" Fisker said.'"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0].page_content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5a0cbe8-18a6-4af2-b447-7abb8b734451",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"language": "python",
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -113,7 +113,7 @@
"\n",
"The modules are (from least to most complex):\n",
"\n",
"- [Models](https://python.langchain.com/en/latest/modules/models.html): Supported model types and integrations.\n",
"- [Models](https://python.langchain.com/docs/modules/model_io/models/): Supported model types and integrations.\n",
"\n",
"- [Prompts](https://python.langchain.com/en/latest/modules/prompts.html): Prompt management, optimization, and serialization.\n",
"\n",
@@ -18,8 +18,7 @@
"outputs": [],
"source": [
"# # Install package\n",
"!pip install \"unstructured[local-inference]\"\n",
"!pip install layoutparser[layoutmodels,tesseract]"
"!pip install \"unstructured[all-docs]\"\n"
]
},
{
@@ -1,6 +1,7 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "e48afb8d",
"metadata": {},
@@ -11,7 +12,8 @@
"\n",
"Below we show how to easily go from a YouTube URL to text to chat!\n",
"\n",
"We will use the `OpenAIWhisperParser`, which will use the OpenAI Whisper API to transcribe audio to text.\n",
"We will use the `OpenAIWhisperParser`, which will use the OpenAI Whisper API to transcribe audio to text, \n",
"and the `OpenAIWhisperParserLocal` for local support and running on private clouds or on premise.\n",
"\n",
"Note: You will need to have an `OPENAI_API_KEY` supplied."
]
@@ -24,7 +26,7 @@
"outputs": [],
"source": [
"from langchain.document_loaders.generic import GenericLoader\n",
"from langchain.document_loaders.parsers import OpenAIWhisperParser\n",
"from langchain.document_loaders.parsers import OpenAIWhisperParser, OpenAIWhisperParserLocal\n",
"from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader"
]
},
@@ -46,7 +48,8 @@
"outputs": [],
"source": [
"! pip install yt_dlp\n",
"! pip install pydub"
"! pip install pydub\n",
"! pip install librosa"
]
},
{
@@ -63,6 +66,18 @@
"Let's take the first lecture of Andrej Karpathy's YouTube course as an example! "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8682f256",
"metadata": {},
"outputs": [],
"source": [
"# set a flag to switch between local and remote parsing\n",
"# change this to True if you want to use local parsing\n",
"local = False"
]
},
{
"cell_type": "code",
"execution_count": 2,
@@ -102,7 +117,10 @@
"save_dir = \"~/Downloads/YouTube\"\n",
"\n",
"# Transcribe the videos to text\n",
"loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())\n",
"if local:\n",
"    loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParserLocal())\n",
"else:\n",
"    loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())\n",
"docs = loader.load()"
]
},
@@ -275,7 +293,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -289,7 +307,12 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.10.11"
},
"vscode": {
"interpreter": {
"hash": "97cc609b13305c559618ec78a438abc56230b9381f827f22d070313b9a1f3777"
}
}
},
"nbformat": 4,
231 docs/extras/integrations/llms/Fireworks.ipynb Normal file
@@ -0,0 +1,231 @@
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "cc6caafa",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Fireworks\n",
|
||||
"\n",
|
||||
">[Fireworks](https://www.fireworks.ai/) is an AI startup focused on accelerating product development on generative AI by creating an innovative AI experiment and production platform. \n",
|
||||
"\n",
|
||||
"This example goes over how to use LangChain to interact with `Fireworks` models."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 25,
|
||||
"id": "60b6dbb2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms.fireworks import Fireworks, FireworksChat\n",
|
||||
"from langchain import PromptTemplate, LLMChain\n",
|
||||
"from langchain.prompts.chat import (\n",
|
||||
" ChatPromptTemplate,\n",
|
||||
" HumanMessagePromptTemplate,\n",
|
||||
")\n",
|
||||
"import os"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ccff689e",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Setup\n",
|
||||
"\n",
|
||||
"Contact Fireworks AI for the an API Key to access our models\n",
|
||||
"\n",
|
||||
"Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-13b-chat."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "9ca87a2e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Initialize a Fireworks LLM\n",
|
||||
"os.environ['FIREWORKS_API_KEY'] = \"\" #change this to your own API KEY\n",
|
||||
"llm = Fireworks(model_id=\"fireworks-llama-v2-13b-chat\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"id": "43a11ba8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Create LLM chain\n",
|
||||
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "acc24d0c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Calling the Model\n",
|
||||
"\n",
|
||||
"You can use the LLMs to call the model for specified prompt(s). \n",
|
||||
"\n",
|
||||
"Current Specified Models: \n",
|
||||
"* fireworks-falcon-7b, fireworks-falcon-40b-w8a16\n",
|
||||
"* fireworks-guanaco-30b, fireworks-guanaco-33b-w8a16\n",
|
||||
"* fireworks-llama-7b, fireworks-llama-13b, fireworks-llama-30b-w8a16\n",
|
||||
"* fireworks-llama-v2-13b, fireworks-llama-v2-13b-chat, fireworks-llama-v2-13b-w8a16, fireworks-llama-v2-13b-chat-w8a16\n",
|
||||
"* fireworks-llama-v2-7b, fireworks-llama-v2-7b-chat, fireworks-llama-v2-7b-w8a16, fireworks-llama-v2-7b-chat-w8a16"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "bf0a425c",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"It's a question that has been debated for years, with different analysts and fans making their cases for various signal-callers. Here are some of the top contenders for the title of best quarterback in the NFL:\n",
|
||||
"\n",
|
||||
"1. Tom Brady: The New England Patriots legend has won six Super Bowls and has been named Super Bowl MVP four times. He's known for his precision passing, pocket presence, and ability to read defenses.\n",
|
||||
"2. Aaron Rodgers: The Green Bay Packers quarterback has won two Super Bowls and has been named NFL MVP twice. He's known for his quick release, accuracy, and ability to extend plays with his feet.\n",
|
||||
"3. Drew Brees: The New Orleans Saints quarterback has won a Super Bowl and has been named NFL MVP once. He's known for his accuracy, pocket presence, and ability to read defenses.\n",
|
||||
"4. Patrick Mahomes: The Kansas City Chiefs quarterback has won a Super Bowl and has been named NFL MVP twice. He's known for his arm strength, athleticism, and ability to make plays outside of the pocket.\n",
|
||||
"5. Russell Wilson: The Seattle Seahawks quarterback has won a Super Bowl and has been named NFL MVP once. He's known for his mobility, accuracy, and ability to extend plays with his feet.\n",
|
||||
"\n",
|
||||
"Of course, there are other talented quarterbacks in the league, such as Lamar Jackson, Deshaun Watson, and Carson Wentz, who could also be considered among the best. Ultimately, the answer to the question of who's the best quarterback in the NFL is subjective and can vary depending on individual perspectives and criteria.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"#single prompt\n",
|
||||
"output = llm(\"Who's the best quarterback in the NFL?\")\n",
|
||||
"print(output)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "afc7de6f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"generations=[[Generation(text=\"\\nWho is the best cricket player in the world in 2016?\\nThe best cricket player in the world in 2016 is Virat Kohli. The Indian captain has had a fabulous year, scoring heavily in all formats of the game, leading India to several victories, and breaking several records. In Test cricket, Kohli has scored 1215 runs at an average of 75.33 with 6 centuries and 4 fifties, which is the highest number of runs scored by any player in a calendar year. In ODI cricket, he has scored 1143 runs at an average of 83.42 with 7 centuries and 6 fifties, which is also the highest number of runs scored by any player in a calendar year. Additionally, Kohli has led India to the number one ranking in Test cricket, and has been named the ICC Test Player of the Year and the ICC ODI Player of the Year.\\nVirat Kohli has been in incredible form in 2016, and his performances have made him the standout player of the year. Other players who have had a great year include Steve Smith, Joe Root, and Kane Williamson, but Kohli's consistency and dominance in all formats of the game make him the best cricket player in the world in 2016.\", generation_info=None)], [Generation(text=\"\\n\\nA: LeBron James.\\n\\nB: Kevin Durant.\\n\\nC: Steph Curry.\\n\\nD: James Harden.\\n\\nE: Other (please specify).\\n\\nWhat's your answer?\", generation_info=None)]] llm_output={'token_usage': {}, 'model_id': 'fireworks-llama-v2-13b-chat'} run=[RunInfo(run_id=UUID('d14b6bee-7692-46ad-8798-acb6f72fc7fb')), RunInfo(run_id=UUID('b9f5b3b5-9e62-4eaf-b269-ecf0cbbcfb82'))]\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"#calling multiple prompts\n",
|
||||
"output = llm.generate([\"Who's the best cricket player in 2016?\", \"Who's the best basketball player in the league?\"])\n",
|
||||
"print(output)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "b801c20d",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"Kansas City in December can be quite chilly, with average high\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"#setting parameters: model_id, temperature, max_tokens, top_p\n",
|
||||
"llm = Fireworks(model_id=\"fireworks-llama-v2-13b-chat\", temperature=0.7, max_tokens=15, top_p=1.0)\n",
|
||||
"print(llm(\"What's the weather like in Kansas City in December?\"))"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "137662a6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Create and Run Chain\n",
|
||||
"\n",
|
||||
"Create a prompt template to be used with the LLM Chain. Once this prompt template is created, initialize the chain with the LLM and prompt template, and run the chain with the specified prompts."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 26,
|
||||
"id": "fd2c6bc1",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"(Note: I'm just an AI and not a branding expert, so take this as a starting point for your own research and brainstorming.)\n",
|
||||
"A good name for a company that makes football helmets could be:\n",
|
||||
"\n",
|
||||
"1. Helix Pro: This name plays off the idea of a helix, or spiral, shape that is commonly associated with football helmets. \"Pro\" implies a professional-level product.\n",
|
||||
"2. Gridiron Gear: \"Gridiron\" is a term used to describe a football field, and \"gear\" highlights the company's focus on producing high-quality football helmets.\n",
|
||||
"3. Linebacker Lab: \"Linebacker\" is a position on the football field, and \"Lab\" suggests a focus on research and development.\n",
|
||||
"4. Helmet Hut: This name is simple and easy to remember, and it immediately conveys the company's focus on football helmets.\n",
|
||||
"5. Tackle Tech: \"Tackle\" is a term used in football to describe a hit or collision, and \"Tech\" implies a focus on advanced technology and innovation.\n",
|
||||
"6. Victory Vest: \"Victory\" implies a focus on winning and success, and \"Vest\" could suggest a protective or armored design.\n",
|
||||
"7. Pigskin Pro: \"Pigskin\" is a term used to describe a football, and \"Pro\" implies a professional-level product.\n",
|
||||
"8. Football Fusion: This name could suggest a combination of different materials or technologies to create a high-quality football helmet.\n",
|
||||
"9. Endzone Edge: \"Endzone\" is the area of the football field where a team scores a touchdown, and \"Edge\" implies a competitive advantage.\n",
|
||||
"10. MVP Masks: \"MVP\" stands for \"Most Valuable Player,\" and \"Masks\" highlights the protective nature of the company's football helmets.\n",
|
||||
"\n",
|
||||
"Remember, the name you choose for your company should be memorable, easy to pronounce and spell, and convey a sense of quality and professionalism. It's also important to check that the name isn't already in use by another company, and to consider any potential trademark issues.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"human_message_prompt = HumanMessagePromptTemplate(\n",
|
||||
" prompt=PromptTemplate(\n",
|
||||
" template=\"What is a good name for a company that makes {product}?\",\n",
|
||||
" input_variables=[\"product\"],\n",
|
||||
" )\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])\n",
|
||||
"chat = Fireworks()\n",
|
||||
"chain = LLMChain(llm=chat, prompt=chat_prompt_template)\n",
|
||||
"output = chain.run(\"football helmets\")\n",
|
||||
"\n",
|
||||
"print(output)"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.9"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -28,9 +28,9 @@
|
||||
"\n",
|
||||
"To use the wrapper, you must [deploy a model on AzureML](https://learn.microsoft.com/en-us/azure/machine-learning/how-to-use-foundation-models?view=azureml-api-2#deploying-foundation-models-to-endpoints-for-inferencing) and obtain the following parameters:\n",
|
||||
"\n",
|
||||
"* `endpoint_api_key`: The API key provided by the endpoint\n",
|
||||
"* `endpoint_url`: The REST endpoint url provided by the endpoint\n",
|
||||
"* `deployment_name`: The deployment name of the endpoint"
|
||||
"* `endpoint_api_key`: Required - The API key provided by the endpoint\n",
|
||||
"* `endpoint_url`: Required - The REST endpoint url provided by the endpoint\n",
|
||||
"* `deployment_name`: Not required - The deployment name of the model using the endpoint"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -39,11 +39,14 @@
|
||||
"source": [
|
||||
"## Content Formatter\n",
|
||||
"\n",
|
||||
"The `content_formatter` parameter is a handler class for transforming the request and response of an AzureML endpoint to match with required schema. Since there are a wide range of models in the model catalog, each of which may process data differently from one another, a `ContentFormatterBase` class is provided to allow users to transform data to their liking. Additionally, there are three content formatters already provided:\n",
|
||||
"The `content_formatter` parameter is a handler class for transforming the request and response of an AzureML endpoint to match with required schema. Since there are a wide range of models in the model catalog, each of which may process data differently from one another, a `ContentFormatterBase` class is provided to allow users to transform data to their liking. The following content formatters are provided:\n",
|
||||
"\n",
|
||||
"* `OSSContentFormatter`: Formats request and response data for models from the Open Source category in the Model Catalog. Note, that not all models in the Open Source category may follow the same schema\n",
|
||||
"* `DollyContentFormatter`: Formats request and response data for the `dolly-v2-12b` model\n",
|
||||
"* `GPT2ContentFormatter`: Formats request and response data for GPT2\n",
|
||||
"* `DollyContentFormatter`: Formats request and response data for the Dolly-v2\n",
|
||||
"* `HFContentFormatter`: Formats request and response data for text-generation Hugging Face models\n",
|
||||
"* `LLamaContentFormatter`: Formats request and response data for LLaMa2\n",
|
||||
"\n",
|
||||
"*Note: `OSSContentFormatter` is being deprecated and replaced with `GPT2ContentFormatter`. The logic is the same but `GPT2ContentFormatter` is a more suitable name. You can still continue to use `OSSContentFormatter` as the changes are backwards compatibile.*\n",
|
||||
"\n",
|
||||
"Below is an example using a summarization model from Hugging Face."
|
||||
]
|
||||
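As an aside before the Hugging Face example referenced above: if none of the built-in formatters fit your endpoint, you can subclass `ContentFormatterBase` yourself. Here is a minimal, hedged sketch; the request/response JSON schema shown is an assumption and must be adapted to whatever your deployed model actually expects:

```python
import json
from typing import Dict

from langchain.llms.azureml_endpoint import ContentFormatterBase


class CustomFormatter(ContentFormatterBase):
    """Hypothetical formatter for an endpoint expecting {"inputs": [...]} payloads."""

    content_type = "application/json"
    accepts = "application/json"

    def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes:
        # Wrap the prompt in the payload schema the endpoint expects (assumed here).
        return json.dumps({"inputs": [prompt], "parameters": model_kwargs}).encode("utf-8")

    def format_response_payload(self, output: bytes) -> str:
        # Pull the generated text out of the (assumed) response schema.
        return json.loads(output)[0]["generated_text"]
```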
@@ -100,7 +103,6 @@
|
||||
"llm = AzureMLOnlineEndpoint(\n",
|
||||
" endpoint_api_key=os.getenv(\"BART_ENDPOINT_API_KEY\"),\n",
|
||||
" endpoint_url=os.getenv(\"BART_ENDPOINT_URL\"),\n",
|
||||
" deployment_name=\"linydub-bart-large-samsum-3\",\n",
|
||||
" model_kwargs={\"temperature\": 0.8, \"max_new_tokens\": 400},\n",
|
||||
" content_formatter=content_formatter,\n",
|
||||
")\n",
|
||||
@@ -167,7 +169,6 @@
|
||||
"llm = AzureMLOnlineEndpoint(\n",
|
||||
" endpoint_api_key=os.getenv(\"DOLLY_ENDPOINT_API_KEY\"),\n",
|
||||
" endpoint_url=os.getenv(\"DOLLY_ENDPOINT_URL\"),\n",
|
||||
" deployment_name=\"databricks-dolly-v2-12b-4\",\n",
|
||||
" model_kwargs={\"temperature\": 0.8, \"max_tokens\": 300},\n",
|
||||
" content_formatter=content_formatter,\n",
|
||||
")\n",
|
||||
|
||||
375
docs/extras/integrations/llms/edenai.ipynb
Normal file
File diff suppressed because one or more lines are too long
@@ -1,18 +1,15 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Google Cloud Platform Vertex AI PaLM \n",
|
||||
"\n",
|
||||
"Note: This is seperate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
|
||||
"Note: This is seperate from the Google PaLM integration, it exposes [Vertex AI PaLM API](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview) on Google Cloud. \n",
|
||||
"\n",
|
||||
"PaLM API on Vertex AI is a Preview offering, subject to the Pre-GA Offerings Terms of the [GCP Service Specific Terms](https://cloud.google.com/terms/service-terms). \n",
|
||||
"\n",
|
||||
"Pre-GA products and features may have limited support, and changes to pre-GA products and features may not be compatible with other pre-GA versions. For more information, see the [launch stage descriptions](https://cloud.google.com/products#product-launch-stages). Further, by using PaLM API on Vertex AI, you agree to the Generative AI Preview [terms and conditions](https://cloud.google.com/trustedtester/aitos) (Preview Terms).\n",
|
||||
"\n",
|
||||
"For PaLM API on Vertex AI, you can process personal data as outlined in the Cloud Data Processing Addendum, subject to applicable restrictions and obligations in the Agreement (as defined in the Preview Terms).\n",
|
||||
"By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) Customer Data to train its foundation models as part of Google Cloud`s AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google's Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).\n",
|
||||
"\n",
|
||||
"To use Vertex AI PaLM you must have the `google-cloud-aiplatform` Python package installed and either:\n",
|
||||
"- Have credentials configured for your environment (gcloud, workload identity, etc...)\n",
|
||||
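Once the package is installed and credentials are configured, basic usage is short. A minimal sketch (the prompt is illustrative):

```python
from langchain.llms import VertexAI

# Assumes `google-cloud-aiplatform` is installed and GCP credentials are configured.
llm = VertexAI()
print(llm("What are some of the pros and cons of Python as a programming language?"))
```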
@@ -101,6 +98,7 @@
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
|
||||
@@ -216,7 +216,7 @@
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.experimental.llms import JsonFormer\n",
|
||||
"from langchain_experimental.llms import JsonFormer\n",
|
||||
"\n",
|
||||
"json_former = JsonFormer(json_schema=decoder_schema, pipeline=hf_model)"
|
||||
]
|
||||
|
||||
176
docs/extras/integrations/llms/minimax.ipynb
Normal file
@@ -0,0 +1,176 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Minimax\n",
|
||||
"\n",
|
||||
"[Minimax](https://api.minimax.chat) is a Chinese startup that provides natural language processing models for companies and individuals.\n",
|
||||
"\n",
|
||||
"This example demonstrates using Langchain to interact with Minimax."
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Setup\n",
|
||||
"\n",
|
||||
"To run this notebook, you'll need a [Minimax account](https://api.minimax.chat), an [API key](https://api.minimax.chat/user-center/basic-information/interface-key), and a [Group ID](https://api.minimax.chat/user-center/basic-information)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Single model call"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 33,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms import Minimax"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 34,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Load the model\n",
|
||||
"minimax = Minimax(minimax_api_key=\"YOUR_API_KEY\", minimax_group_id=\"YOUR_GROUP_ID\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"pycharm": {
|
||||
"is_executing": true
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Prompt the model\n",
|
||||
"minimax(\"What is the difference between panda and bear?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Chained model calls"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# get api_key and group_id: https://api.minimax.chat/user-center/basic-information\n",
|
||||
"# We need `MINIMAX_API_KEY` and `MINIMAX_GROUP_ID`\n",
|
||||
"\n",
|
||||
"import os\n",
|
||||
"\n",
|
||||
"os.environ[\"MINIMAX_API_KEY\"] = \"YOUR_API_KEY\"\n",
|
||||
"os.environ[\"MINIMAX_GROUP_ID\"] = \"YOUR_GROUP_ID\""
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms import Minimax\n",
|
||||
"from langchain import PromptTemplate, LLMChain"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"template = \"\"\"Question: {question}\n",
|
||||
"\n",
|
||||
"Answer: Let's think step by step.\"\"\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate(template=template, input_variables=[\"question\"])"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = Minimax()"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm_chain = LLMChain(prompt=prompt, llm=llm)"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"question = \"What NBA team won the Championship in the year Jay Zhou was born?\"\n",
|
||||
"\n",
|
||||
"llm_chain.run(question)"
|
||||
],
|
||||
"metadata": {
|
||||
"collapsed": false
|
||||
}
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": ".venv",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.4"
|
||||
},
|
||||
"orig_nbformat": 4
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -16,7 +16,9 @@
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Install petals\n",
|
||||
"The `petals` package is required to use the Petals API. Install `petals` using `pip3 install petals`."
|
||||
"The `petals` package is required to use the Petals API. Install `petals` using `pip3 install petals`.\n",
|
||||
"\n",
|
||||
"For Apple Silicon(M1/M2) users please follow this guide [https://github.com/bigscience-workshop/petals/issues/147#issuecomment-1365379642](https://github.com/bigscience-workshop/petals/issues/147#issuecomment-1365379642) to install petals "
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -62,7 +64,7 @@
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdin",
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
" ········\n"
|
||||
|
||||
@@ -162,7 +162,7 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.experimental.llms import RELLM\n",
|
||||
"from langchain_experimental.llms import RELLM\n",
|
||||
"\n",
|
||||
"model = RELLM(pipeline=hf_model, regex=pattern, max_new_tokens=200)\n",
|
||||
"\n",
|
||||
|
||||
176
docs/extras/integrations/llms/xinference.ipynb
Normal file
@@ -0,0 +1,176 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Xorbits Inference (Xinference)\n",
|
||||
"\n",
|
||||
"[Xinference](https://github.com/xorbitsai/inference) is a powerful and versatile library designed to serve LLMs, \n",
|
||||
"speech recognition models, and multimodal models, even on your laptop. It supports a variety of models compatible with GGML, such as chatglm, baichuan, whisper, vicuna, orca, and many others. This notebook demonstrates how to use Xinference with LangChain."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Installation\n",
|
||||
"\n",
|
||||
"Install `Xinference` through PyPI:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install \"xinference[all]\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Deploy Xinference Locally or in a Distributed Cluster.\n",
|
||||
"\n",
|
||||
"For local deployment, run `xinference`. \n",
|
||||
"\n",
|
||||
"To deploy Xinference in a cluster, first start an Xinference supervisor using the `xinference-supervisor`. You can also use the option -p to specify the port and -H to specify the host. The default port is 9997.\n",
|
||||
"\n",
|
||||
"Then, start the Xinference workers using `xinference-worker` on each server you want to run them on. \n",
|
||||
"\n",
|
||||
"You can consult the README file from [Xinference](https://github.com/xorbitsai/inference) for more information.\n",
|
||||
"## Wrapper\n",
|
||||
"\n",
|
||||
"To use Xinference with LangChain, you need to first launch a model. You can use command line interface (CLI) to do so:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Model uid: 7167b2b0-2a04-11ee-83f0-d29396a3f064\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"!xinference launch -n vicuna-v1.3 -f ggmlv3 -q q4_0"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"A model UID is returned for you to use. Now you can use Xinference with LangChain:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"' You can visit the Eiffel Tower, Notre-Dame Cathedral, the Louvre Museum, and many other historical sites in Paris, the capital of France.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 14,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.llms import Xinference\n",
|
||||
"\n",
|
||||
"llm = Xinference(\n",
|
||||
" server_url=\"http://0.0.0.0:9997\",\n",
|
||||
" model_uid = \"7167b2b0-2a04-11ee-83f0-d29396a3f064\"\n",
|
||||
")\n",
|
||||
"\n",
|
||||
"llm(\n",
|
||||
" prompt=\"Q: where can we visit in the capital of France? A:\",\n",
|
||||
" generate_config={\"max_tokens\": 1024, \"stream\": True},\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Integrate with a LLMChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"A: You can visit many places in Paris, such as the Eiffel Tower, the Louvre Museum, Notre-Dame Cathedral, the Champs-Elysées, Montmartre, Sacré-Cœur, and the Palace of Versailles.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain import PromptTemplate, LLMChain\n",
|
||||
"\n",
|
||||
"template = \"Where can we visit in the capital of {country}?\"\n",
|
||||
"\n",
|
||||
"prompt = PromptTemplate(template=template, input_variables=[\"country\"])\n",
|
||||
"\n",
|
||||
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
|
||||
"\n",
|
||||
"generated = llm_chain.run(country=\"France\")\n",
|
||||
"print(generated)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Lastly, terminate the model when you do not need to use it:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"!xinference terminate --model-uid \"7167b2b0-2a04-11ee-83f0-d29396a3f064\""
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "myenv3.9",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.11"
|
||||
},
|
||||
"orig_nbformat": 4
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 2
|
||||
}
|
||||
@@ -0,0 +1,61 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "91c6a7ef",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Streamlit Chat Message History\n",
|
||||
"\n",
|
||||
"This notebook goes over how to use Streamlit to store chat message history. Note, StreamlitChatMessageHistory only works when run in a Streamlit app. For more on Streamlit check out their\n",
|
||||
"[getting started documentation](https://docs.streamlit.io/library/get-started)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "d15e3302",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.memory import StreamlitChatMessageHistory\n",
|
||||
"\n",
|
||||
"history = StreamlitChatMessageHistory(\"foo\")\n",
|
||||
"\n",
|
||||
"history.add_user_message(\"hi!\")\n",
|
||||
"history.add_ai_message(\"whats up?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "64fc465e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"history.messages"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "poetry-venv",
|
||||
"language": "python",
|
||||
"name": "poetry-venv"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -13,7 +13,7 @@ pip install python-arango
|
||||
|
||||
Connect your ArangoDB Database with a Chat Model to get insights on your data.
|
||||
|
||||
See the notebook example [here](/docs/modules/chains/additional/graph_arangodb_qa.html).
|
||||
See the notebook example [here](/docs/use_cases/graph/graph_arangodb_qa.html).
|
||||
|
||||
```python
|
||||
from arango import ArangoClient
|
||||
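# A minimal sketch of opening a database handle for the integration to build on.
# The host, database name, and credentials below are placeholders.
client = ArangoClient(hosts="http://localhost:8529")
db = client.db("_system", username="root", password="<password>")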
|
||||
@@ -22,7 +22,7 @@ If you don't you can refer to [Argilla - 🚀 Quickstart](https://docs.argilla.i
|
||||
|
||||
## Tracking
|
||||
|
||||
See a [usage example of `ArgillaCallbackHandler`](/docs/modules/callbacks/integrations/argilla.html).
|
||||
See a [usage example of `ArgillaCallbackHandler`](/docs/integrations/callbacks/argilla.html).
|
||||
|
||||
```python
|
||||
from langchain.callbacks import ArgillaCallbackHandler
|
||||
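# A minimal sketch of wiring the handler into a run; the dataset name, API URL,
# and API key below are placeholders taken from the Argilla quickstart defaults.
argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url="http://localhost:6900",
    api_key="argilla.apikey",
)
# Pass it to any LLM, chain, or agent via `callbacks=[argilla_callback]`.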
|
||||
@@ -28,7 +28,7 @@ from langchain.memory import CassandraChatMessageHistory
|
||||
|
||||
## Memory
|
||||
|
||||
See a [usage example](/docs/modules/memory/integrations/cassandra_chat_message_history).
|
||||
See a [usage example](/docs/integrations/memory/cassandra_chat_message_history).
|
||||
|
||||
```python
|
||||
from langchain.memory import CassandraChatMessageHistory
|
||||
|
||||
@@ -166,7 +166,7 @@
|
||||
"source": [
|
||||
"### SQL Database Agent example\n",
|
||||
"\n",
|
||||
"This example demonstrates the use of the [SQL Database Agent](/docs/modules/agents/toolkits/sql_database.html) for answering questions over a Databricks database."
|
||||
"This example demonstrates the use of the [SQL Database Agent](/docs/integrations/toolkits/sql_database.html) for answering questions over a Databricks database."
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -32,11 +32,11 @@ See [MLflow AI Gateway](/docs/ecosystem/integrations/mlflow_ai_gateway).
|
||||
Databricks as an LLM provider
|
||||
-----------------------------
|
||||
|
||||
The notebook [Wrap Databricks endpoints as LLMs](/docs/modules/model_io/models/llms/integrations/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
|
||||
The notebook [Wrap Databricks endpoints as LLMs](/docs/integrations/llms/databricks.html) illustrates the method to wrap Databricks endpoints as LLMs in LangChain. It supports two types of endpoints: the serving endpoint, which is recommended for both production and development, and the cluster driver proxy app, which is recommended for interactive development.
|
||||
|
||||
Databricks endpoints support Dolly, but are also great for hosting models like MPT-7B or any other models from the Hugging Face ecosystem. Databricks endpoints can also be used with proprietary models like OpenAI to provide a governance layer for enterprises.
|
||||
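As a rough sketch of what wrapping a serving endpoint looks like (the endpoint name below is a placeholder; the linked notebook is the authoritative walkthrough):

```python
from langchain.llms import Databricks

# Assumes a serving endpoint named "dolly" exists in the current workspace.
llm = Databricks(endpoint_name="dolly")
llm("How are you?")
```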
|
||||
Databricks Dolly
|
||||
----------------
|
||||
|
||||
Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](/docs/modules/model_io/models/llms/integrations/huggingface_hub.html) for instructions to access it through the Hugging Face Hub integration with LangChain.
|
||||
Databricks’ Dolly is an instruction-following large language model trained on the Databricks machine learning platform that is licensed for commercial use. The model is available on Hugging Face Hub as databricks/dolly-v2-12b. See the notebook [Hugging Face Hub](/docs/integrations/llms/huggingface_hub.html) for instructions to access it through the Hugging Face Hub integration with LangChain.
|
||||
|
||||
@@ -13,7 +13,7 @@ This page provides instructions on how to use the DataForSEO search APIs within
|
||||
The DataForSEO utility wraps the API. To import this utility, use:
|
||||
|
||||
```python
|
||||
from langchain.utilities import DataForSeoAPIWrapper
|
||||
from langchain.utilities.dataforseo_api_search import DataForSeoAPIWrapper
|
||||
```
|
||||
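As a minimal, hedged sketch of using the wrapper (the login and password values are placeholders; the wrapper is assumed to read them from the `DATAFORSEO_LOGIN` and `DATAFORSEO_PASSWORD` environment variables):

```python
import os

from langchain.utilities.dataforseo_api_search import DataForSeoAPIWrapper

os.environ["DATAFORSEO_LOGIN"] = "<your-login>"        # placeholder
os.environ["DATAFORSEO_PASSWORD"] = "<your-password>"  # placeholder

wrapper = DataForSeoAPIWrapper()
wrapper.run("What is the weather like in Los Angeles?")
```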
|
||||
For a detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/dataforseo.ipynb).
|
||||
|
||||
22
docs/extras/integrations/providers/fireworks.md
Normal file
@@ -0,0 +1,22 @@
|
||||
# Fireworks
|
||||
|
||||
This page covers how to use the Fireworks models within LangChain.
|
||||
|
||||
## Installation and Setup
|
||||
|
||||
- To use the Fireworks models, you need a Fireworks API key. To generate one, sign up at platform.fireworks.ai.
|
||||
- Authenticate by setting the `FIREWORKS_API_KEY` environment variable (see the sketch below).
|
||||
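A minimal sketch of the environment setup (the key value is a placeholder):

```python
import os

os.environ["FIREWORKS_API_KEY"] = "<your-api-key>"  # placeholder value
```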
|
||||
## LLM
|
||||
|
||||
Fireworks integrates with LangChain through the LLM module, which allows for standardized usage of any models deployed on the Fireworks platform.
|
||||
|
||||
In this example, we'll work with the llama-v2-13b model.
|
||||
```python
|
||||
from langchain.llms.fireworks import Fireworks
|
||||
|
||||
llm = Fireworks(model="fireworks-llama-v2-13b-chat", max_tokens=256, temperature=0.4)
|
||||
llm("Name 3 sports.")
|
||||
```
|
||||
|
||||
For a more detailed walkthrough, see [here](/docs/extras/modules/model_io/models/llms/integrations/Fireworks.ipynb).
|
||||
25
docs/extras/integrations/providers/minimax.mdx
Normal file
@@ -0,0 +1,25 @@
|
||||
# Minimax
|
||||
|
||||
>[Minimax](https://api.minimax.chat) is a Chinese startup that provides natural language processing models
|
||||
> for companies and individuals.
|
||||
|
||||
## Installation and Setup
|
||||
Get a [Minimax api key](https://api.minimax.chat/user-center/basic-information/interface-key) and set it as an environment variable (`MINIMAX_API_KEY`)
|
||||
Get a [Minimax group id](https://api.minimax.chat/user-center/basic-information) and set it as an environment variable (`MINIMAX_GROUP_ID`)
|
||||
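A minimal sketch of setting both variables (the values are placeholders):

```python
import os

os.environ["MINIMAX_API_KEY"] = "<your-api-key>"    # placeholder
os.environ["MINIMAX_GROUP_ID"] = "<your-group-id>"  # placeholder
```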
|
||||
|
||||
## LLM
|
||||
|
||||
There exists a Minimax LLM wrapper, which you can access with the import below.
|
||||
See a [usage example](/docs/modules/model_io/models/llms/integrations/minimax.html).
|
||||
|
||||
```python
|
||||
from langchain.llms import Minimax
|
||||
```
|
||||
|
||||
## Text Embedding Model
|
||||
|
||||
There exists a Minimax Embedding model, which you can access with:
|
||||
```python
|
||||
from langchain.embeddings import MiniMaxEmbeddings
|
||||
```
|
||||
@@ -1,6 +1,6 @@
|
||||
# MLflow AI Gateway
|
||||
|
||||
The MLflow AI Gateway service is a powerful tool designed to streamline the usage and management of various large language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests. See [the MLflow AI Gateway documentation](https://mlflow.org/docs/latest/gateway/index.html) for more details.
|
||||
>`The MLflow AI Gateway` service is a powerful tool designed to streamline the usage and management of various large language model (LLM) providers, such as OpenAI and Anthropic, within an organization. It offers a high-level interface that simplifies the interaction with these services by providing a unified endpoint to handle specific LLM related requests. See [the MLflow AI Gateway documentation](https://mlflow.org/docs/latest/gateway/index.html) for more details.
|
||||
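As a rough, hedged sketch of what querying a gateway route from LangChain looks like (the gateway URI and route name below are placeholders):

```python
from langchain.llms import MlflowAIGateway

# Assumes a gateway running locally with a "completions" route configured.
gateway = MlflowAIGateway(
    gateway_uri="http://127.0.0.1:5000",
    route="completions",
)
gateway("What is MLflow?")
```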
|
||||
## Installation and Setup
|
||||
|
||||
|
||||
@@ -1,19 +1,49 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "5d184f91",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# MLflow\n",
|
||||
"\n",
|
||||
"This notebook goes over how to track your LangChain experiments into your MLflow Server"
|
||||
],
|
||||
"id": "5d184f91"
|
||||
">[MLflow](https://www.mlflow.org/docs/latest/what-is-mlflow.html) is a versatile, expandable, open-source platform for managing workflows and artifacts across the machine learning lifecycle. It has built-in integrations with many popular ML libraries, but can be used with any library, algorithm, or deployment tool. It is designed to be extensible, so you can write plugins to support new workflows, libraries, and tools.\n",
|
||||
"\n",
|
||||
"This notebook goes over how to track your LangChain experiments into your `MLflow Server`"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ea73efae-7182-4a89-a492-c865b1fcf981",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## External examples"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "97361a84-4e8f-45ba-b291-814cf73cd8f2",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"`MLflow` provides [several examples](https://github.com/mlflow/mlflow/tree/master/examples/langchain) for the `LangChain` integration:\n",
|
||||
"- [simple_chain](https://github.com/mlflow/mlflow/blob/master/examples/langchain/simple_chain.py)\n",
|
||||
"- [simple_agent](https://github.com/mlflow/mlflow/blob/master/examples/langchain/simple_agent.py)\n",
|
||||
"- [retriever_chain](https://github.com/mlflow/mlflow/blob/master/examples/langchain/retriever_chain.py)\n",
|
||||
"- [retrieval_qa_chain](https://github.com/mlflow/mlflow/blob/master/examples/langchain/retrieval_qa_chain.py)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "e0cbd74b-1542-45a4-a72b-b2eedeffd2e0",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Example"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "ca7bd72f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -24,12 +54,12 @@
|
||||
"!pip install openai\n",
|
||||
"!pip install google-search-results\n",
|
||||
"!python -m spacy download en_core_web_sm"
|
||||
],
|
||||
"id": "ca7bd72f"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "bf8e1f5c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -38,23 +68,23 @@
|
||||
"os.environ[\"MLFLOW_TRACKING_URI\"] = \"\"\n",
|
||||
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
|
||||
"os.environ[\"SERPAPI_API_KEY\"] = \"\""
|
||||
],
|
||||
"id": "bf8e1f5c"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "fd49fd45",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.callbacks import MlflowCallbackHandler\n",
|
||||
"from langchain.llms import OpenAI"
|
||||
],
|
||||
"id": "fd49fd45"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "578cac8c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -70,12 +100,12 @@
|
||||
"llm = OpenAI(\n",
|
||||
" model_name=\"gpt-3.5-turbo\", temperature=0, callbacks=[mlflow_callback], verbose=True\n",
|
||||
")"
|
||||
],
|
||||
"id": "578cac8c"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "9b20acae",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -83,23 +113,23 @@
|
||||
"llm_result = llm.generate([\"Tell me a joke\"])\n",
|
||||
"\n",
|
||||
"mlflow_callback.flush_tracker(llm)"
|
||||
],
|
||||
"id": "9b20acae"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "8b872046",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"from langchain.chains import LLMChain"
|
||||
],
|
||||
"id": "8b872046"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "1b2627ef",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
@@ -117,12 +147,12 @@
|
||||
"]\n",
|
||||
"synopsis_chain.apply(test_prompts)\n",
|
||||
"mlflow_callback.flush_tracker(synopsis_chain)"
|
||||
],
|
||||
"id": "1b2627ef"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "e002823a",
|
||||
"metadata": {
|
||||
"id": "_jN73xcPVEpI"
|
||||
},
|
||||
@@ -130,12 +160,12 @@
|
||||
"source": [
|
||||
"from langchain.agents import initialize_agent, load_tools\n",
|
||||
"from langchain.agents import AgentType"
|
||||
],
|
||||
"id": "e002823a"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "655bd47e",
|
||||
"metadata": {
|
||||
"id": "Gpq4rk6VT9cu"
|
||||
},
|
||||
@@ -154,8 +184,7 @@
|
||||
" \"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\"\n",
|
||||
")\n",
|
||||
"mlflow_callback.flush_tracker(agent, finish=True)"
|
||||
],
|
||||
"id": "655bd47e"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
@@ -177,9 +206,9 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.16"
|
||||
"version": "3.10.12"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
}
|
||||
|
||||
@@ -51,4 +51,4 @@ Momento can be used as a distributed memory store for LLMs.
|
||||
|
||||
### Chat Message History Memory
|
||||
|
||||
See [this notebook](/docs/modules/memory/integrations/momento_chat_message_history.html) for a walkthrough of how to use Momento as a memory store for chat message history.
|
||||
See [this notebook](/docs/integrations/memory/momento_chat_message_history.html) for a walkthrough of how to use Momento as a memory store for chat message history.
|
||||
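A minimal sketch of the chat-history usage (the session id and cache name are placeholders):

```python
from datetime import timedelta

from langchain.memory import MomentoChatMessageHistory

history = MomentoChatMessageHistory.from_client_params(
    "session-1", "langchain", timedelta(days=1)  # placeholder session id / cache name
)
history.add_user_message("hi!")
```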
|
||||
@@ -31,7 +31,7 @@ db = SQLDatabase.from_uri(conn_str)
|
||||
db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
|
||||
```
|
||||
|
||||
From here, see the [SQL Chain](/docs/modules/chains/popular/sqlite.html) documentation on how to use.
|
||||
From here, see the [SQL Chain](/docs/use_cases/tabular/sqlite.html) documentation on how to use.
|
||||
|
||||
|
||||
## LLMCache
|
||||
|
||||
@@ -58,7 +58,7 @@ For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_
|
||||
|
||||
## Chain
|
||||
|
||||
See a [usage example](/docs/modules/chains/additional/moderation).
|
||||
See a [usage example](/docs/guides/safety/moderation).
|
||||
|
||||
```python
|
||||
from langchain.chains import OpenAIModerationChain
|
||||
|
||||
@@ -177,8 +177,9 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.chains import TransformChain, SQLDatabaseChain, SimpleSequentialChain\n",
|
||||
"from langchain.sql_database import SQLDatabase"
|
||||
"from langchain.chains import TransformChain, SimpleSequentialChain\n",
|
||||
"from langchain.sql_database import SQLDatabase\n",
|
||||
"from langchain_experimental.sql import SQLDatabaseChain"
|
||||
]
|
||||
},
|
||||
{
|
||||
|
||||
@@ -106,4 +106,4 @@ Redis can be used to persist LLM conversations.
|
||||
For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](/docs/modules/memory/integrations/vectorstore_retriever_memory.html).
|
||||
|
||||
#### Chat Message History Memory
|
||||
For a detailed example of Redis to cache conversation message history, see [this notebook](/docs/modules/memory/integrations/redis_chat_message_history.html).
|
||||
For a detailed example of using Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history.html).
|
||||
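A minimal sketch (the Redis URL and session id are placeholders):

```python
from langchain.memory import RedisChatMessageHistory

history = RedisChatMessageHistory(
    session_id="session-1", url="redis://localhost:6379/0"  # placeholders
)
history.add_user_message("hi!")
```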
|
||||
@@ -15,7 +15,7 @@ pip install rockset
|
||||
See a [usage example](/docs/integrations/vectorstores/rockset).
|
||||
|
||||
```python
|
||||
from langchain.vectorstores import RocksetDB
|
||||
from langchain.vectorstores import Rockset
|
||||
```
|
||||
|
||||
## Document Loader
|
||||
|
||||
916
docs/extras/integrations/providers/sagemaker_tracking.ipynb
Normal file
@@ -0,0 +1,916 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ef3909cf-72ca-4841-85c6-ef4e0eae3aaf",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# SageMaker Tracking\n",
|
||||
"\n",
|
||||
"This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability:\n",
|
||||
"* **Scenario 1**: *Single LLM* - A case where a single LLM model is used to generate output based on a given prompt.\n",
|
||||
"* **Scenario 2**: *Sequential Chain* - A case where a sequential chain of two LLM models is used.\n",
|
||||
"* **Scenario 3**: *Agent with Tools (Chain of Thought)* - A case where multiple tools (search and math) are used in addition to an LLM.\n",
|
||||
"\n",
|
||||
"[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. \n",
|
||||
"\n",
|
||||
"[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability of Amazon SageMaker that lets you organize, track, compare and evaluate ML experiments and model versions.\n",
|
||||
"\n",
|
||||
"In this notebook, we will create a single experiment to log the prompts from each scenario."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "94c22cb4-3b1c-432b-b3be-0235eec79c5c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Installation and Setup"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "2353436d-17fe-4f58-a2f9-c299d56393fd",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"!pip install sagemaker\n",
|
||||
"!pip install openai\n",
|
||||
"!pip install google-search-results"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "65dcf62e-7a38-4119-adb9-d9e884e82499",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"source": [
|
||||
"First, setup the required API keys\n",
|
||||
"* OpenAI: https://platform.openai.com/account/api-keys (For OpenAI LLM model)\n",
|
||||
"* Google SERP API: https://serpapi.com/manage-api-key (For Google Search Tool)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "5ec2b898-0cfc-4308-8e86-569cd7b7cf41",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"\n",
|
||||
"## Add your API keys below\n",
|
||||
"os.environ[\"OPENAI_API_KEY\"] = \"<ADD-KEY-HERE>\"\n",
|
||||
"os.environ[\"SERPAPI_API_KEY\"] = \"<ADD-KEY-HERE>\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "80968ebf-519f-46de-8703-97532ac39e3e",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.llms import OpenAI\n",
|
||||
"from langchain.prompts import PromptTemplate\n",
|
||||
"from langchain.chains import LLMChain, SimpleSequentialChain\n",
|
||||
"from langchain.agents import initialize_agent, load_tools\n",
|
||||
"from langchain.agents import Tool\n",
|
||||
"from langchain.callbacks import SageMakerCallbackHandler\n",
|
||||
"\n",
|
||||
"from sagemaker.analytics import ExperimentAnalytics\n",
|
||||
"from sagemaker.session import Session\n",
|
||||
"from sagemaker.experiments.run import Run"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "b67d031f-a01f-4009-ad29-c80ab8ad50ea",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## LLM Prompt Tracking"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "da2d70ee-173b-469d-a718-54c33d862844",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#LLM Hyperparameters\n",
|
||||
"HPARAMS = {\n",
|
||||
" \"temperature\": 0.1,\n",
|
||||
" \"model_name\": \"text-davinci-003\",\n",
|
||||
"}\n",
|
||||
"\n",
|
||||
"#Bucket used to save prompt logs (Use `None` is used to save the default bucket or otherwise change it)\n",
|
||||
"BUCKET_NAME = None\n",
|
||||
"\n",
|
||||
"#Experiment name\n",
|
||||
"EXPERIMENT_NAME = \"langchain-sagemaker-tracker\"\n",
|
||||
"\n",
|
||||
"#Create SageMaker Session with the given bucket\n",
|
||||
"session = Session(default_bucket=BUCKET_NAME)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "7239a39a-08d8-43cb-8922-81abdd5d9ebf",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Scenario 1 - LLM"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "abc00335-50c8-4119-adb8-4c4ab8522e23",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"RUN_NAME = \"run-scenario-1\"\n",
|
||||
"PROMPT_TEMPLATE = \"tell me a joke about {topic}\"\n",
|
||||
"INPUT_VARIABLES = {\"topic\": \"fish\"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "4a3a3cbe-db85-4255-8d8b-eaafdca8c6e2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:\n",
|
||||
"\n",
|
||||
" # Create SageMaker Callback\n",
|
||||
" sagemaker_callback = SageMakerCallbackHandler(run)\n",
|
||||
"\n",
|
||||
" # Define LLM model with callback\n",
|
||||
" llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)\n",
|
||||
"\n",
|
||||
" # Create prompt template\n",
|
||||
" prompt = PromptTemplate.from_template(template=PROMPT_TEMPLATE)\n",
|
||||
"\n",
|
||||
" # Create LLM Chain\n",
|
||||
" chain = LLMChain(llm=llm, prompt=prompt, callbacks=[sagemaker_callback])\n",
|
||||
"\n",
|
||||
" # Run chain\n",
|
||||
" chain.run(**INPUT_VARIABLES)\n",
|
||||
"\n",
|
||||
" # Reset the callback\n",
|
||||
" sagemaker_callback.flush_tracker()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "7dc69934-9f42-40b7-9931-36a3371a38da",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Scenario 2 - Sequential Chain"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "50b75ef9-9825-4ccc-8414-4cd7525a1b68",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"RUN_NAME = \"run-scenario-2\"\n",
|
||||
"\n",
|
||||
"PROMPT_TEMPLATE_1 = \"\"\"You are a playwright. Given the title of play, it is your job to write a synopsis for that title.\n",
|
||||
"Title: {title}\n",
|
||||
"Playwright: This is a synopsis for the above play:\"\"\"\n",
|
||||
"PROMPT_TEMPLATE_2 = \"\"\"You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.\n",
|
||||
"Play Synopsis: {synopsis}\n",
|
||||
"Review from a New York Times play critic of the above play:\"\"\"\n",
|
||||
"\n",
|
||||
"INPUT_VARIABLES = {\n",
|
||||
" \"input\": \"documentary about good video games that push the boundary of game design\"\n",
|
||||
"}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "fb7fff5f-e89f-40e2-96b4-3641a0b6e9b4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:\n",
|
||||
"\n",
|
||||
" # Create SageMaker Callback\n",
|
||||
" sagemaker_callback = SageMakerCallbackHandler(run)\n",
|
||||
"\n",
|
||||
" # Create prompt templates for the chain\n",
|
||||
" prompt_template1 = PromptTemplate.from_template(template=PROMPT_TEMPLATE_1)\n",
|
||||
" prompt_template2 = PromptTemplate.from_template(template=PROMPT_TEMPLATE_2)\n",
|
||||
"\n",
|
||||
" # Define LLM model with callback\n",
|
||||
" llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)\n",
|
||||
"\n",
|
||||
" # Create chain1\n",
|
||||
" chain1 = LLMChain(llm=llm, prompt=prompt_template1, callbacks=[sagemaker_callback])\n",
|
||||
"\n",
|
||||
" # Create chain2\n",
|
||||
" chain2 = LLMChain(llm=llm, prompt=prompt_template2, callbacks=[sagemaker_callback])\n",
|
||||
"\n",
|
||||
" # Create Sequential chain\n",
|
||||
" overall_chain = SimpleSequentialChain(chains=[chain1, chain2], callbacks=[sagemaker_callback])\n",
|
||||
"\n",
|
||||
" # Run overall sequential chain\n",
|
||||
" overall_chain.run(**INPUT_VARIABLES)\n",
|
||||
"\n",
|
||||
" # Reset the callback\n",
|
||||
" sagemaker_callback.flush_tracker()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6b82bd0e-c626-4797-bb06-c1983f176315",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"### Scenario 3 - Agent with Tools"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "b5066f03-49dc-4868-be8e-d21ce22063fe",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"RUN_NAME = \"run-scenario-3\"\n",
|
||||
"PROMPT_TEMPLATE = \"Who is the oldest person alive? And what is their current age raised to the power of 1.51?\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "98385c42-9e44-4b03-b76d-007cb4797864",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"with Run(experiment_name=EXPERIMENT_NAME, run_name=RUN_NAME, sagemaker_session=session) as run:\n",
|
||||
"\n",
|
||||
" # Create SageMaker Callback\n",
|
||||
" sagemaker_callback = SageMakerCallbackHandler(run)\n",
|
||||
"\n",
|
||||
" # Define LLM model with callback\n",
|
||||
" llm = OpenAI(callbacks=[sagemaker_callback], **HPARAMS)\n",
|
||||
"\n",
|
||||
" # Define tools\n",
|
||||
" tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callbacks=[sagemaker_callback])\n",
|
||||
"\n",
|
||||
" # Initialize agent with all the tools\n",
|
||||
" agent = initialize_agent(tools, llm, agent=\"zero-shot-react-description\", callbacks=[sagemaker_callback])\n",
|
||||
"\n",
|
||||
" # Run agent\n",
|
||||
" agent.run(input=PROMPT_TEMPLATE)\n",
|
||||
"\n",
|
||||
" # Reset the callback\n",
|
||||
" sagemaker_callback.flush_tracker()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "c306a1c9-99f8-476d-96db-347746f5cfe0",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"source": [
|
||||
"## Load Log Data\n",
|
||||
"\n",
|
||||
"Once the prompts are logged, we can easily load and convert them to Pandas DataFrame as follows."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "ec7b4af2-e01d-4f6c-9de5-70d2b4acb9e6",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"#Load\n",
|
||||
"logs = ExperimentAnalytics(experiment_name=EXPERIMENT_NAME)\n",
|
||||
"\n",
|
||||
"#Convert as pandas dataframe\n",
|
||||
"df = logs.dataframe(force_refresh=True)\n",
|
||||
"\n",
|
||||
"print(df.shape)\n",
|
||||
"df.head()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "29991c75-f9cf-4c36-abfd-903c09fb170d",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"As can be seen above, there are three runs (rows) in the experiment corresponding to each scenario. Each run logs the prompts and related LLM settings/hyperparameters as json and are saved in s3 bucket. Feel free to load and explore the log data from each json path."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "61a695d6-0aef-4284-9e12-eea8bc143dbd",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"availableInstances": [
|
||||
{
|
||||
"_defaultOrder": 0,
|
||||
"_isFastLaunch": true,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 4,
|
||||
"name": "ml.t3.medium",
|
||||
"vcpuNum": 2
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 1,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 8,
|
||||
"name": "ml.t3.large",
|
||||
"vcpuNum": 2
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 2,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 16,
|
||||
"name": "ml.t3.xlarge",
|
||||
"vcpuNum": 4
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 3,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 32,
|
||||
"name": "ml.t3.2xlarge",
|
||||
"vcpuNum": 8
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 4,
|
||||
"_isFastLaunch": true,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 8,
|
||||
"name": "ml.m5.large",
|
||||
"vcpuNum": 2
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 5,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 16,
|
||||
"name": "ml.m5.xlarge",
|
||||
"vcpuNum": 4
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 6,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 32,
|
||||
"name": "ml.m5.2xlarge",
|
||||
"vcpuNum": 8
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 7,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 64,
|
||||
"name": "ml.m5.4xlarge",
|
||||
"vcpuNum": 16
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 8,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 128,
|
||||
"name": "ml.m5.8xlarge",
|
||||
"vcpuNum": 32
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 9,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 192,
|
||||
"name": "ml.m5.12xlarge",
|
||||
"vcpuNum": 48
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 10,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 256,
|
||||
"name": "ml.m5.16xlarge",
|
||||
"vcpuNum": 64
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 11,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 384,
|
||||
"name": "ml.m5.24xlarge",
|
||||
"vcpuNum": 96
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 12,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 8,
|
||||
"name": "ml.m5d.large",
|
||||
"vcpuNum": 2
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 13,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 16,
|
||||
"name": "ml.m5d.xlarge",
|
||||
"vcpuNum": 4
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 14,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 32,
|
||||
"name": "ml.m5d.2xlarge",
|
||||
"vcpuNum": 8
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 15,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 64,
|
||||
"name": "ml.m5d.4xlarge",
|
||||
"vcpuNum": 16
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 16,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 128,
|
||||
"name": "ml.m5d.8xlarge",
|
||||
"vcpuNum": 32
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 17,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 192,
|
||||
"name": "ml.m5d.12xlarge",
|
||||
"vcpuNum": 48
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 18,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 256,
|
||||
"name": "ml.m5d.16xlarge",
|
||||
"vcpuNum": 64
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 19,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 384,
|
||||
"name": "ml.m5d.24xlarge",
|
||||
"vcpuNum": 96
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 20,
|
||||
"_isFastLaunch": false,
|
||||
"category": "General purpose",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": true,
|
||||
"memoryGiB": 0,
|
||||
"name": "ml.geospatial.interactive",
|
||||
"supportedImageNames": [
|
||||
"sagemaker-geospatial-v1-0"
|
||||
],
|
||||
"vcpuNum": 0
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 21,
|
||||
"_isFastLaunch": true,
|
||||
"category": "Compute optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 4,
|
||||
"name": "ml.c5.large",
|
||||
"vcpuNum": 2
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 22,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Compute optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 8,
|
||||
"name": "ml.c5.xlarge",
|
||||
"vcpuNum": 4
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 23,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Compute optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 16,
|
||||
"name": "ml.c5.2xlarge",
|
||||
"vcpuNum": 8
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 24,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Compute optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 32,
|
||||
"name": "ml.c5.4xlarge",
|
||||
"vcpuNum": 16
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 25,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Compute optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 72,
|
||||
"name": "ml.c5.9xlarge",
|
||||
"vcpuNum": 36
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 26,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Compute optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 96,
|
||||
"name": "ml.c5.12xlarge",
|
||||
"vcpuNum": 48
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 27,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Compute optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 144,
|
||||
"name": "ml.c5.18xlarge",
|
||||
"vcpuNum": 72
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 28,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Compute optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 192,
|
||||
"name": "ml.c5.24xlarge",
|
||||
"vcpuNum": 96
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 29,
|
||||
"_isFastLaunch": true,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 1,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 16,
|
||||
"name": "ml.g4dn.xlarge",
|
||||
"vcpuNum": 4
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 30,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 1,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 32,
|
||||
"name": "ml.g4dn.2xlarge",
|
||||
"vcpuNum": 8
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 31,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 1,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 64,
|
||||
"name": "ml.g4dn.4xlarge",
|
||||
"vcpuNum": 16
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 32,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 1,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 128,
|
||||
"name": "ml.g4dn.8xlarge",
|
||||
"vcpuNum": 32
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 33,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 4,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 192,
|
||||
"name": "ml.g4dn.12xlarge",
|
||||
"vcpuNum": 48
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 34,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 1,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 256,
|
||||
"name": "ml.g4dn.16xlarge",
|
||||
"vcpuNum": 64
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 35,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 1,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 61,
|
||||
"name": "ml.p3.2xlarge",
|
||||
"vcpuNum": 8
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 36,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 4,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 244,
|
||||
"name": "ml.p3.8xlarge",
|
||||
"vcpuNum": 32
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 37,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 8,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 488,
|
||||
"name": "ml.p3.16xlarge",
|
||||
"vcpuNum": 64
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 38,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 8,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 768,
|
||||
"name": "ml.p3dn.24xlarge",
|
||||
"vcpuNum": 96
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 39,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Memory Optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 16,
|
||||
"name": "ml.r5.large",
|
||||
"vcpuNum": 2
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 40,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Memory Optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 32,
|
||||
"name": "ml.r5.xlarge",
|
||||
"vcpuNum": 4
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 41,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Memory Optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 64,
|
||||
"name": "ml.r5.2xlarge",
|
||||
"vcpuNum": 8
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 42,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Memory Optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 128,
|
||||
"name": "ml.r5.4xlarge",
|
||||
"vcpuNum": 16
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 43,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Memory Optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 256,
|
||||
"name": "ml.r5.8xlarge",
|
||||
"vcpuNum": 32
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 44,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Memory Optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 384,
|
||||
"name": "ml.r5.12xlarge",
|
||||
"vcpuNum": 48
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 45,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Memory Optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 512,
|
||||
"name": "ml.r5.16xlarge",
|
||||
"vcpuNum": 64
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 46,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Memory Optimized",
|
||||
"gpuNum": 0,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 768,
|
||||
"name": "ml.r5.24xlarge",
|
||||
"vcpuNum": 96
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 47,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 1,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 16,
|
||||
"name": "ml.g5.xlarge",
|
||||
"vcpuNum": 4
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 48,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 1,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 32,
|
||||
"name": "ml.g5.2xlarge",
|
||||
"vcpuNum": 8
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 49,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 1,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 64,
|
||||
"name": "ml.g5.4xlarge",
|
||||
"vcpuNum": 16
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 50,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 1,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 128,
|
||||
"name": "ml.g5.8xlarge",
|
||||
"vcpuNum": 32
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 51,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 1,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 256,
|
||||
"name": "ml.g5.16xlarge",
|
||||
"vcpuNum": 64
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 52,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 4,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 192,
|
||||
"name": "ml.g5.12xlarge",
|
||||
"vcpuNum": 48
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 53,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 4,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 384,
|
||||
"name": "ml.g5.24xlarge",
|
||||
"vcpuNum": 96
|
||||
},
|
||||
{
|
||||
"_defaultOrder": 54,
|
||||
"_isFastLaunch": false,
|
||||
"category": "Accelerated computing",
|
||||
"gpuNum": 8,
|
||||
"hideHardwareSpecs": false,
|
||||
"memoryGiB": 768,
|
||||
"name": "ml.g5.48xlarge",
|
||||
"vcpuNum": 192
|
||||
}
|
||||
],
|
||||
"instance_type": "ml.t3.large",
|
||||
"kernelspec": {
|
||||
"display_name": "conda_pytorch_p310",
|
||||
"language": "python",
|
||||
"name": "conda_pytorch_p310"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.10"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -16,5 +16,5 @@ pip install spacy
 
 See a [usage example](/docs/modules/data_connection/document_transformers/text_splitters/split_by_token.html#spacy).
 
 ```python
-from langchain.llms import SpacyTextSplitter
+from langchain.text_splitter import SpacyTextSplitter
 ```
@@ -11,7 +11,9 @@ ecosystem within LangChain.
 If you are using a loader that runs locally, use the following steps to get `unstructured` and
 its dependencies running locally.
 
-- Install the Python SDK with `pip install "unstructured[local-inference]"`
+- Install the Python SDK with `pip install unstructured`.
+- You can install document specific dependencies with extras, i.e. `pip install "unstructured[docx]"`.
+- To install the dependencies for all document types, use `pip install "unstructured[all-docs]"`.
 - Install the following system dependencies if they are not already available on your system.
   Depending on what document types you're parsing, you may not need all of these.
     - `libmagic-dev` (filetype detection)
@@ -177,7 +177,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.3"
+"version": "3.9.1"
 }
 },
 "nbformat": 4,
docs/extras/integrations/providers/xinference.mdx (new file, 102 lines)
@@ -0,0 +1,102 @@
# Xorbits Inference (Xinference)

This page demonstrates how to use [Xinference](https://github.com/xorbitsai/inference)
with LangChain.

`Xinference` is a powerful and versatile library designed to serve LLMs,
speech recognition models, and multimodal models, even on your laptop.
With Xorbits Inference, you can effortlessly deploy and serve your own or
state-of-the-art built-in models using just a single command.

## Installation and Setup

Xinference can be installed via pip from PyPI:

```bash
pip install "xinference[all]"
```

## LLM

Xinference supports various models compatible with GGML, including chatglm, baichuan, whisper,
vicuna, and orca. To view the built-in models, run the command:

```bash
xinference list --all
```

### Wrapper for Xinference

You can start a local instance of Xinference by running:

```bash
xinference
```

You can also deploy Xinference in a distributed cluster. To do so, first start an Xinference supervisor
on the server where you want to run it:

```bash
xinference-supervisor -H "${supervisor_host}"
```

Then, start the Xinference workers on each of the other servers where you want to run them:

```bash
xinference-worker -e "http://${supervisor_host}:9997"
```
Once Xinference is running, an endpoint will be accessible for model management via the CLI or
the Xinference client.

For local deployment, the endpoint will be http://localhost:9997.

For cluster deployment, the endpoint will be http://${supervisor_host}:9997.

Then, you need to launch a model. You can specify the model name and other attributes,
including model_size_in_billions and quantization. You can use the command line interface (CLI) to
do it. For example:

```bash
xinference launch -n orca -s 3 -q q4_0
```

A model uid will be returned.
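
The launch can also be done programmatically. Below is a minimal sketch, assuming the `xinference.client.Client` API and a local deployment; the model name, size, and quantization mirror the CLI example above:

```python
from xinference.client import Client

# Connect to the running Xinference endpoint (local deployment assumed)
client = Client("http://localhost:9997")

# Launch the same orca model as in the CLI example; the model uid is returned
model_uid = client.launch_model(
    model_name="orca",
    model_size_in_billions=3,
    quantization="q4_0",
)
```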

Example usage:

```python
from langchain.llms import Xinference

llm = Xinference(
    server_url="http://0.0.0.0:9997",
    model_uid=model_uid,  # replace with the model UID returned from launching the model
)

llm(
    prompt="Q: where can we visit in the capital of France? A:",
    generate_config={"max_tokens": 1024, "stream": True},
)
```

### Usage

For more information and detailed examples, refer to the
[example notebook for xinference](../modules/models/llms/integrations/xinference.ipynb)

### Embeddings

Xinference also supports embedding queries and documents. See
[example notebook for xinference embeddings](../modules/data_connection/text_embedding/integrations/xinference.ipynb)
for a more detailed demo.
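
As a quick sketch of what that looks like, assuming an embedding-capable model has already been launched and its uid stored in `model_uid`:

```python
from langchain.embeddings import XinferenceEmbeddings

embeddings = XinferenceEmbeddings(
    server_url="http://0.0.0.0:9997",
    model_uid=model_uid,  # replace with the UID of a launched embedding model
)

# Embed a single query string and a small batch of documents
query_result = embeddings.embed_query("This is a test query")
doc_results = embeddings.embed_documents(["foo", "bar"])
```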

docs/extras/integrations/retrievers/re_phrase.ipynb (new file, 222 lines)
@@ -0,0 +1,222 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e8624be2",
"metadata": {},
"source": [
"# RePhraseQueryRetriever\n",
"\n",
"Simple retriever that applies an LLM between the user input and the query pass the to retriever.\n",
"\n",
"It can be used to pre-process the user input in any way.\n",
"\n",
"The default prompt used in the `from_llm` classmethod:\n",
"\n",
"```\n",
"DEFAULT_TEMPLATE = \"\"\"You are an assistant tasked with taking a natural language \\\n",
"query from a user and converting it into a query for a vectorstore. \\\n",
"In this process, you strip out information that is not relevant for \\\n",
"the retrieval task. Here is the user query: {question}\"\"\"\n",
"```\n",
"\n",
"Create a vectorstore."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1bfa6834",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import WebBaseLoader\n",
"\n",
"loader = WebBaseLoader(\"https://lilianweng.github.io/posts/2023-06-23-agent/\")\n",
"data = loader.load()\n",
"\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d0b51556",
"metadata": {},
"outputs": [],
"source": [
"import logging\n",
"\n",
"logging.basicConfig()\n",
"logging.getLogger(\"langchain.retrievers.re_phraser\").setLevel(logging.INFO)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "20e1e787",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.retrievers import RePhraseQueryRetriever"
]
},
{
"cell_type": "markdown",
"id": "88c0a972",
"metadata": {},
"source": [
"## Using the default prompt"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "503994bd",
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(temperature=0)\n",
"retriever_from_llm = RePhraseQueryRetriever.from_llm(\n",
"    retriever=vectorstore.as_retriever(), llm=llm\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8d17ecc9",
"metadata": {
"scrolled": false
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:langchain.retrievers.re_phraser:Re-phrased question: The user query can be converted into a query for a vectorstore as follows:\n",
"\n",
"\"approaches to Task Decomposition\"\n"
]
}
],
"source": [
"docs = retriever_from_llm.get_relevant_documents(\n",
"    \"Hi I'm Lance. What are the approaches to Task Decomposition?\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "76d54f1a",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:langchain.retrievers.re_phraser:Re-phrased question: Query for vectorstore: \"Types of Memory\"\n"
]
}
],
"source": [
"docs = retriever_from_llm.get_relevant_documents(\n",
"    \"I live in San Francisco. What are the Types of Memory?\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0513a6e2",
"metadata": {},
"source": [
"## Supply a prompt"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "410d6a64",
"metadata": {},
"outputs": [],
"source": [
"from langchain import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"\n",
"QUERY_PROMPT = PromptTemplate(\n",
"    input_variables=[\"question\"],\n",
" template=\"\"\"You are an assistant tasked with taking a natural languge query from a user\n",
"    and converting it into a query for a vectorstore. In the process, strip out all \n",
"    information that is not relevant for the retrieval task and return a new, simplified\n",
"    question for vectorstore retrieval. The new user query should be in pirate speech.\n",
"    Here is the user query: {question} \"\"\",\n",
")\n",
"llm = ChatOpenAI(temperature=0)\n",
"llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2dbffdd3",
"metadata": {},
"outputs": [],
"source": [
"retriever_from_llm_chain = RePhraseQueryRetriever(\n",
"    retriever=vectorstore.as_retriever(), llm_chain=llm_chain\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "103b4be3",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"INFO:langchain.retrievers.re_phraser:Re-phrased question: Ahoy matey! What be Maximum Inner Product Search, ye scurvy dog?\n"
]
}
],
"source": [
"docs = retriever_from_llm_chain.get_relevant_documents(\n",
"    \"Hi I'm Lance. What is Maximum Inner Product Search?\"\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}