Mirror of https://github.com/hwchase17/langchain.git (synced 2026-02-04 08:10:25 +00:00)

Compare commits: 37 commits, `bagatur/do` ... `bagatur/rf`
Commits (SHA1; author and date columns were not captured in this view):

- 2139546565
- bee3435982
- 611f18c944
- d5aa277b94
- 9e1ed17bfb
- 97411e998f
- 6d299a55c0
- e6240fecab
- 38523d7c57
- 2895ca87cf
- ee708739c3
- 18411c379c
- 9c871f427b
- a06db53c37
- 21a1538949
- 45f49ca439
- c425e6f740
- 65980c22b8
- e182d630f7
- 6432494f9d
- 79124fd71d
- 20abe24819
- a1d7f2b3e1
- feb41c5e28
- 85a4594ed7
- 33dccf0f66
- 942071bf57
- 0c95f3a981
- 323941a90a
- 3e0cd11f51
- 70b6315b23
- 656e87beb9
- 04a5a37e92
- ae67ba4dbb
- 91ec9da534
- 7be72e1103
- ee5bd986de
.github/ISSUE_TEMPLATE/bug-report.yml (vendored, 143 lines changed)
@@ -5,60 +5,84 @@ body:
|
||||
- type: markdown
|
||||
attributes:
|
||||
value: >
|
||||
Thank you for taking the time to file a bug report. Before creating a new
|
||||
issue, please make sure to take a few moments to check the issue tracker
|
||||
for existing issues about the bug.
|
||||
Thank you for taking the time to file a bug report.
|
||||
|
||||
Relevant links to check before filing a bug report to see if your issue has already been reported, fixed or
|
||||
if there's another way to solve your problem:
|
||||
|
||||
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
|
||||
[API Reference](https://api.python.langchain.com/en/stable/),
|
||||
[GitHub search](https://github.com/langchain-ai/langchain),
|
||||
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
|
||||
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue)
|
||||
- type: checkboxes
|
||||
id: checks
|
||||
attributes:
|
||||
label: Checked other resources
|
||||
description: Please confirm and check all the following options.
|
||||
options:
|
||||
- label: I added a very descriptive title to this issue.
|
||||
required: true
|
||||
- label: I searched the LangChain documentation with the integrated search.
|
||||
required: true
|
||||
- label: I used the GitHub search to find a similar question and didn't find it.
|
||||
required: true
|
||||
- type: textarea
|
||||
id: reproduction
|
||||
validations:
|
||||
required: true
|
||||
attributes:
|
||||
label: Example Code
|
||||
description: |
|
||||
Please add a self-contained, [minimal, reproducible, example](https://stackoverflow.com/help/minimal-reproducible-example) with your use case.
|
||||
|
||||
If a maintainer can copy it, run it, and see it right away, there's a much higher chance that you'll be able to get help.
|
||||
|
||||
If you're including an error message, please include the full stack trace not just the last error.
|
||||
|
||||
**Important!** Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
|
||||
Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
|
||||
|
||||
placeholder: |
|
||||
The following code:
|
||||
|
||||
```python
|
||||
from langchain_core.runnables import RunnableLambda
|
||||
|
||||
def bad_code(inputs) -> int:
|
||||
raise NotImplementedError('For demo purpose')
|
||||
|
||||
chain = RunnableLambda(bad_code)
|
||||
chain.invoke('Hello!')
|
||||
```
|
||||
|
||||
Include both the error and the full stack trace if reporting an exception!
|
||||
|
||||
- type: textarea
|
||||
id: description
|
||||
attributes:
|
||||
label: Description
|
||||
description: |
|
||||
What is the problem, question, or error?
|
||||
|
||||
Write a short description telling what you are doing, what you expect to happen, and what is currently happening.
|
||||
placeholder: |
|
||||
* I'm trying to use the `langchain` library to do X.
|
||||
* I expect to see Y.
|
||||
* Instead, it does Z.
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: system-info
|
||||
attributes:
|
||||
label: System Info
|
||||
description: Please share your system info with us.
|
||||
placeholder: LangChain version, platform, python version, ...
|
||||
placeholder: |
|
||||
"pip freeze | grep langchain"
|
||||
platform
|
||||
python version
|
||||
validations:
|
||||
required: true
|
||||
|
||||
- type: textarea
|
||||
id: who-can-help
|
||||
attributes:
|
||||
label: Who can help?
|
||||
description: |
|
||||
Your issue will be replied to more quickly if you can figure out the right person to tag with @
|
||||
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
|
||||
|
||||
The core maintainers strive to read all issues, but tagging them will help them prioritize.
|
||||
|
||||
Please tag fewer than 3 people.
|
||||
|
||||
@hwchase17 - project lead
|
||||
|
||||
Tracing / Callbacks
|
||||
- @agola11
|
||||
|
||||
Async
|
||||
- @agola11
|
||||
|
||||
DataLoader Abstractions
|
||||
- @eyurtsev
|
||||
|
||||
LLM/Chat Wrappers
|
||||
- @hwchase17
|
||||
- @agola11
|
||||
|
||||
Tools / Toolkits
|
||||
- ...
|
||||
|
||||
placeholder: "@Username ..."
|
||||
|
||||
- type: checkboxes
|
||||
id: information-scripts-examples
|
||||
attributes:
|
||||
label: Information
|
||||
description: "The problem arises when using:"
|
||||
options:
|
||||
- label: "The official example notebooks/scripts"
|
||||
- label: "My own modified scripts"
|
||||
|
||||
- type: checkboxes
|
||||
id: related-components
|
||||
attributes:
|
||||
@@ -77,30 +101,3 @@ body:
|
||||
- label: "Chains"
|
||||
- label: "Callbacks/Tracing"
|
||||
- label: "Async"
|
||||
|
||||
- type: textarea
|
||||
id: reproduction
|
||||
validations:
|
||||
required: true
|
||||
attributes:
|
||||
label: Reproduction
|
||||
description: |
|
||||
Please provide a [code sample](https://stackoverflow.com/help/minimal-reproducible-example) that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
|
||||
If you have code snippets, error messages, stack traces please provide them here as well.
|
||||
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
|
||||
Avoid screenshots when possible, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.
|
||||
|
||||
placeholder: |
|
||||
Steps to reproduce the behavior:
|
||||
|
||||
1.
|
||||
2.
|
||||
3.
|
||||
|
||||
- type: textarea
|
||||
id: expected-behavior
|
||||
validations:
|
||||
required: true
|
||||
attributes:
|
||||
label: Expected behavior
|
||||
description: "A clear and concise description of what you would expect to happen."
|
||||
|
||||
.github/ISSUE_TEMPLATE/other.yml (vendored, 18 lines, file deleted)
@@ -1,18 +0,0 @@
name: Other Issue
description: Raise an issue that wouldn't be covered by the other templates.
title: "Issue: <Please write a comprehensive title after the 'Issue: ' prefix>"
labels: [04 - Other]

body:
  - type: textarea
    attributes:
      label: "Issue you'd like to raise."
      description: >
        Please describe the issue you'd like to raise as clearly as possible.
        Make sure to include any relevant links or references.

  - type: textarea
    attributes:
      label: "Suggestion:"
      description: >
        Please outline a suggestion to improve the issue here.
.github/actions/poetry_setup/action.yml (vendored, 4 lines changed)
@@ -28,6 +28,7 @@ runs:
  steps:
    - uses: actions/setup-python@v5
      name: Setup python ${{ inputs.python-version }}
      id: setup-python
      with:
        python-version: ${{ inputs.python-version }}

@@ -74,7 +75,8 @@ runs:
      env:
        POETRY_VERSION: ${{ inputs.poetry-version }}
        PYTHON_VERSION: ${{ inputs.python-version }}
      run: pipx install "poetry==$POETRY_VERSION" --python "python$PYTHON_VERSION" --verbose
      # Install poetry using the python version installed by setup-python step.
      run: pipx install "poetry==$POETRY_VERSION" --python '${{ steps.setup-python.outputs.python-path }}' --verbose

    - name: Restore pip and poetry cached dependencies
      uses: actions/cache@v3
cookbook/together_ai.ipynb (new file, 156 lines; notebook content shown below, cell metadata omitted)
@@ -0,0 +1,156 @@

## Together AI + RAG

[Together AI](https://python.langchain.com/docs/integrations/llms/together) has a broad set of OSS LLMs via inference API.

See [here](https://api.together.xyz/playground). We use `mistralai/Mixtral-8x7B-Instruct-v0.1` for RAG on the Mixtral paper.

Download the paper: https://arxiv.org/pdf/2401.04088.pdf

! pip install --quiet pypdf chromadb tiktoken openai langchain-together

# Load
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("~/Desktop/mixtral.pdf")
data = loader.load()

# Split
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)

# Add to vectorDB
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma

"""
from langchain_together.embeddings import TogetherEmbeddings
embeddings = TogetherEmbeddings(model="togethercomputer/m2-bert-80M-8k-retrieval")
"""
vectorstore = Chroma.from_documents(
    documents=all_splits,
    collection_name="rag-chroma",
    embedding=OpenAIEmbeddings(),
)

retriever = vectorstore.as_retriever()

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

# RAG prompt
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

# LLM
from langchain_community.llms import Together

llm = Together(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    temperature=0.0,
    max_tokens=2000,
    top_k=1,
)

# RAG chain
chain = (
    RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
    | prompt
    | llm
    | StrOutputParser()
)

chain.invoke("What are the Architectural details of Mixtral?")

Output:
Answer: The architectural details of Mixtral are as follows:
- Dimension (dim): 4096
- Number of layers (n_layers): 32
- Dimension of each head (head_dim): 128
- Hidden dimension (hidden_dim): 14336
- Number of heads (n_heads): 32
- Number of kv heads (n_kv_heads): 8
- Context length (context_len): 32768
- Vocabulary size (vocab_size): 32000
- Number of experts (num_experts): 8
- Number of top k experts (top_k_experts): 2

Mixtral is based on a transformer architecture and uses the same modifications as described in [18], with the notable exceptions that Mixtral supports a fully dense context length of 32k tokens, and the feedforward block picks from a set of 8 distinct groups of parameters. At every layer, for every token, a router network chooses two of these groups (the "experts") to process the token and combine their output additively. This technique increases the number of parameters of a model while controlling cost and latency, as the model only uses a fraction of the total set of parameters per token. Mixtral is pretrained with multilingual data using a context size of 32k tokens. It either matches or exceeds the performance of Llama 2 70B and GPT-3.5, over several benchmarks. In particular, Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks.

Trace:
https://smith.langchain.com/public/935fd642-06a6-4b42-98e3-6074f93115cd/r
@@ -182,7 +182,14 @@ You can then use a retriever to fetch only the most relevant pieces and pass tho
In this process, we will look up relevant documents from a *Retriever* and then pass them into the prompt.
A Retriever can be backed by anything - a SQL table, the internet, etc - but in this instance we will populate a vector store and use that as a retriever. For more information on vectorstores, see [this documentation](/docs/modules/data_connection/vectorstores).

First, we need to load the data that we want to index:
First, we need to load the data that we want to index. In order to do this, we will use the WebBaseLoader. This requires installing [BeautifulSoup](https://beautiful-soup-4.readthedocs.io/en/latest/):

```
```shell
pip install beautifulsoup4
```

After that, we can import and use WebBaseLoader.

```python
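For reference, a minimal `WebBaseLoader` call would look something like the sketch below (illustrative only; the URL is a placeholder and `beautifulsoup4` must be installed):

```python
# Hypothetical usage sketch; not part of the diff above.
from langchain_community.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://example.com/some-page")  # placeholder URL
docs = loader.load()
print(len(docs))  # number of Documents loaded from the page
```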
@@ -201,7 +201,7 @@
|
||||
"\n",
|
||||
"* E.g., for Llama-7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4 bit quantization)\n",
|
||||
"* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama?tab=readme-ov-file#model-library), e.g., `ollama pull llama2:13b`\n",
|
||||
"* See the full set of parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain.llms.ollama.Ollama.html)"
|
||||
"* See the full set of parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -241,7 +241,7 @@
|
||||
"\n",
|
||||
"As noted above, see the [API reference](https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html?highlight=llamacpp#langchain.llms.llamacpp.LlamaCpp) for the full set of parameters. \n",
|
||||
"\n",
|
||||
"From the [llama.cpp docs](https://python.langchain.com/docs/integrations/llms/llamacpp), a few are worth commenting on:\n",
|
||||
"From the [llama.cpp API reference docs](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.llamacpp.LlamaCpp.htm), a few are worth commenting on:\n",
|
||||
"\n",
|
||||
"`n_gpu_layers`: number of layers to be loaded into GPU memory\n",
|
||||
"\n",
|
||||
@@ -378,9 +378,9 @@
|
||||
"source": [
|
||||
"### GPT4All\n",
|
||||
"\n",
|
||||
"We can use model weights downloaded from [GPT4All](https://python.langchain.com/docs/integrations/llms/gpt4all) model explorer.\n",
|
||||
"We can use model weights downloaded from [GPT4All](/docs/integrations/llms/gpt4all) model explorer.\n",
|
||||
"\n",
|
||||
"Similar to what is shown above, we can run inference and use [the API reference](https://api.python.langchain.com/en/latest/llms/langchain.llms.gpt4all.GPT4All.html?highlight=gpt4all#langchain.llms.gpt4all.GPT4All) to set parameters of interest."
|
||||
"Similar to what is shown above, we can run inference and use [the API reference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gpt4all.GPT4All.html) to set parameters of interest."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -390,7 +390,7 @@
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pip install gpt4all\n"
|
||||
"%pip install gpt4all"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -582,9 +582,9 @@
|
||||
"source": [
|
||||
"## Use cases\n",
|
||||
"\n",
|
||||
"Given an `llm` created from one of the models above, you can use it for [many use cases](docs/use_cases).\n",
|
||||
"Given an `llm` created from one of the models above, you can use it for [many use cases](/docs/use_cases/).\n",
|
||||
"\n",
|
||||
"For example, here is a guide to [RAG](docs/use_cases/question_answering/local_retrieval_qa) with local LLMs.\n",
|
||||
"For example, here is a guide to [RAG](/docs/use_cases/question_answering/local_retrieval_qa) with local LLMs.\n",
|
||||
"\n",
|
||||
"In general, use cases for local LLMs can be driven by at least two factors:\n",
|
||||
"\n",
|
||||
@@ -611,7 +611,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.12"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -55,17 +55,9 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"execution_count": null,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"[INFO] [09-15 20:00:29] logging.py:55 [t:139698882193216]: requesting llm api endpoint: /chat/eb-instant\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"\"\"\"For basic init and call\"\"\"\n",
|
||||
"import os\n",
|
||||
@@ -126,9 +118,7 @@
|
||||
"from langchain.schema import HumanMessage\n",
|
||||
"from langchain_community.chat_models import QianfanChatEndpoint\n",
|
||||
"\n",
|
||||
"chatLLM = QianfanChatEndpoint(\n",
|
||||
" streaming=True,\n",
|
||||
")\n",
|
||||
"chatLLM = QianfanChatEndpoint()\n",
|
||||
"res = chatLLM.stream([HumanMessage(content=\"hi\")], streaming=True)\n",
|
||||
"for r in res:\n",
|
||||
" print(\"chat resp:\", r)\n",
|
||||
@@ -260,11 +250,11 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.11.4"
|
||||
"version": "3.11.5"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
"hash": "6fa70026b407ae751a5c9e6bd7f7d482379da8ad616f98512780b705c84ee157"
|
||||
"hash": "58f7cb64c3a06383b7f18d2a11305edccbad427293a2b4afa7abe8bfc810d4bb"
|
||||
}
|
||||
}
|
||||
},
|
||||
|
||||
@@ -127,10 +127,8 @@
    "Setup the user id and app id where the model resides. You can find a list of public models on https://clarifai.com/explore/models\n",
    "\n",
    "You will have to also initialize the model id and if needed, the model version id. Some models have many versions, you can choose the one appropriate for your task.\n",
    "\n",
    " or\n",
    " \n",
    "You can use the model_url (for ex: \"https://clarifai.com/anthropic/completion/models/claude-v2\") for intialization."
    " \n",
    "Alternatively, You can use the model_url (for ex: \"https://clarifai.com/anthropic/completion/models/claude-v2\") for intialization."
   ]
  },
  {
@@ -18,7 +18,17 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"execution_count": null,
|
||||
"id": "1ecdb29d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"%pip install --upgrade --quiet langchain-together"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "e7b7170d-d7c5-4890-9714-a37238343805",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -37,7 +47,7 @@
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain_community.llms import Together\n",
|
||||
"from langchain_together import Together\n",
|
||||
"\n",
|
||||
"llm = Together(\n",
|
||||
" model=\"togethercomputer/RedPajama-INCITE-7B-Base\",\n",
|
||||
@@ -51,15 +61,15 @@
|
||||
"You provide succinct and accurate answers. Answer the following question: \n",
|
||||
"\n",
|
||||
"What is a large language model?\"\"\"\n",
|
||||
"print(llm(input_))"
|
||||
"print(llm.invoke(input_))"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "poetry-venv",
|
||||
"display_name": ".venv",
|
||||
"language": "python",
|
||||
"name": "poetry-venv"
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
@@ -71,7 +81,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
"version": "3.11.4"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
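In short, the Together notebook now imports `Together` from the dedicated partner package and calls it through `.invoke()`. A minimal sketch of the updated usage (assuming `langchain-together` is installed and a Together API key is configured in the environment):

```python
# Illustrative sketch mirroring the import and invocation style shown in the diff above.
from langchain_together import Together

llm = Together(model="togethercomputer/RedPajama-INCITE-7B-Base")
print(llm.invoke("What is a large language model?"))
```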
@@ -13,15 +13,12 @@ All functionality related to OpenAI
>[ChatGPT](https://chat.openai.com) is the Artificial Intelligence (AI) chatbot developed by `OpenAI`.

## Installation and Setup
- Install the Python SDK with

- Install the LangChain partner package
```bash
pip install openai
pip install langchain-openai
```
- Get an OpenAI api key and set it as an environment variable (`OPENAI_API_KEY`)
- If you want to use OpenAI's tokenizer (only available for Python 3.9+), install it
```bash
pip install tiktoken
```


## LLM
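Once the partner package is installed and `OPENAI_API_KEY` is set, a minimal smoke test could look like the sketch below (illustrative only; the model name is an assumption, not part of the docs change above):

```python
# Minimal sketch, assuming `pip install langchain-openai` and OPENAI_API_KEY is set.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo")  # model name chosen for illustration
print(llm.invoke("Say hello in five words or fewer").content)
```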
@@ -58,10 +58,8 @@ print(rag.get_relevant_documents("What is cohere ai?"))
### Text Embedding

```python
from langchain_community.chat_models import ChatCohere
from langchain.retrievers import CohereRagRetriever
from langchain_core.documents import Document
from langchain_community.embeddings import CohereEmbeddings

rag = CohereRagRetriever(llm=ChatCohere())
print(rag.get_relevant_documents("What is cohere ai?"))
embeddings = CohereEmbeddings(model="embed-english-light-v3.0")
print(embeddings.embed_documents(["This is a test document."]))
```
docs/docs/integrations/providers/dspy.ipynb (new file, 1183 lines): file diff suppressed because it is too large.
docs/docs/integrations/providers/ragatouille.ipynb (new file, 266 lines; notebook content shown below, cell metadata omitted)
@@ -0,0 +1,266 @@

# RAGatouille

[RAGatouille](https://github.com/bclavie/RAGatouille) makes it as simple as can be to use ColBERT! [ColBERT](https://github.com/stanford-futuredata/ColBERT) is a fast and accurate retrieval model, enabling scalable BERT-based search over large text collections in tens of milliseconds.

There are multiple ways that we can use RAGatouille.

## Setup

The integration lives in the `ragatouille` package.

```bash
pip install -U ragatouille
```

from ragatouille import RAGPretrainedModel

RAG = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")

(Output: "[Jan 10, 10:53:28] Loading segmented_maxsim_cpp extension (set COLBERT_LOAD_TORCH_EXTENSION_VERBOSE=True for more info)..." plus a UserWarning that torch.cuda.amp.GradScaler is enabled but CUDA is not available, so it is disabled.)

## Retriever

We can use RAGatouille as a retriever. For more information on this, see the [RAGatouille Retriever](/docs/integrations/retrievers/ragatouille)

## Document Compressor

We can also use RAGatouille off-the-shelf as a reranker. This will allow us to use ColBERT to rerank retrieved results from any generic retriever. The benefit of this is that it works on top of any existing index, so we don't need to create a new index. We can do this by using the [document compressor](/docs/modules/data_connections/retrievers/contextual_compression) abstraction in LangChain.

## Setup Vanilla Retriever

First, let's set up a vanilla retriever as an example.

import requests
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings


def get_wikipedia_page(title: str):
    """
    Retrieve the full text content of a Wikipedia page.

    :param title: str - Title of the Wikipedia page.
    :return: str - Full text content of the page as raw string.
    """
    # Wikipedia API endpoint
    URL = "https://en.wikipedia.org/w/api.php"

    # Parameters for the API request
    params = {
        "action": "query",
        "format": "json",
        "titles": title,
        "prop": "extracts",
        "explaintext": True,
    }

    # Custom User-Agent header to comply with Wikipedia's best practices
    headers = {"User-Agent": "RAGatouille_tutorial/0.0.1 (ben@clavie.eu)"}

    response = requests.get(URL, params=params, headers=headers)
    data = response.json()

    # Extracting page content
    page = next(iter(data["query"]["pages"].values()))
    return page["extract"] if "extract" in page else None


text = get_wikipedia_page("Hayao_Miyazaki")
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
texts = text_splitter.create_documents([text])

retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever(
    search_kwargs={"k": 10}
)

docs = retriever.invoke("What animation studio did Miyazaki found")
docs[0]

(Output: Document(page_content='collaborative projects. In April 1984, Miyazaki opened his own office in Suginami Ward, naming it Nibariki.'))

We can see that the result isn't super relevant to the question asked.

## Using ColBERT as a reranker

from langchain.retrievers import ContextualCompressionRetriever

compression_retriever = ContextualCompressionRetriever(
    base_compressor=RAG.as_langchain_document_compressor(), base_retriever=retriever
)

compressed_docs = compression_retriever.get_relevant_documents(
    "What animation studio did Miyazaki found"
)

(A UserWarning is emitted: a device_type of 'cuda' was provided, but CUDA is not available, so it is disabled.)

compressed_docs[0]

(Output: Document(page_content='In June 1985, Miyazaki, Takahata, Tokuma and Suzuki founded the animation production company Studio Ghibli, with funding from Tokuma Shoten. Studio Ghibli's first film, Laputa: Castle in the Sky (1986), employed the same production crew of Nausicaä. Miyazaki's designs for the film's setting were inspired by Greek architecture and "European urbanistic templates". Some of the architecture in the film was also inspired by a Welsh mining town; Miyazaki witnessed the mining strike upon his first', metadata={'relevance_score': 26.5194149017334}))

This answer is much more relevant!
@@ -50,20 +50,12 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"execution_count": null,
|
||||
"id": "3f5dc9d7-65e3-4b5b-9086-3327d016cfe0",
|
||||
"metadata": {
|
||||
"tags": []
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
" ········\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Please login and get your API key from https://clarifai.com/settings/security\n",
|
||||
"from getpass import getpass\n",
|
||||
|
||||
@@ -720,7 +720,7 @@
   "metadata": {},
   "source": [
    "## Customise the Query\n",
    "With `custom_query` parameter at search, you are able to adjust the query that is used to retrieve documents from Elasticsearch. This is useful if you want to want to use a more complex query, to support linear boosting of fields."
    "With `custom_query` parameter at search, you are able to adjust the query that is used to retrieve documents from Elasticsearch. This is useful if you want to use a more complex query, to support linear boosting of fields."
   ]
  },
  {
(File diff suppressed because it is too large.)
@@ -44,6 +44,7 @@ LangChain offers many different types of text splitters. Below is a table listin
| Code | Code (Python, JS) specific characters | | Splits text based on characters specific to coding languages. 15 different languages are available to choose from. |
| Token | Tokens | | Splits text on tokens. There exist a few different ways to measure tokens. |
| Character | A user defined character | | Splits text based on a user defined character. One of the simpler methods. |
| [Experimental] Semantic Chunker | Sentences | | First splits on sentences. Then combines ones next to each other if they are semantically similar enough. Taken from [Greg Kamradt](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/5_Levels_Of_Text_Splitting.ipynb) |


## Evaluate text splitters
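To make the table concrete, the character-based splitting it describes can be exercised in a few lines (a minimal sketch; the chunk sizes are arbitrary):

```python
# Minimal sketch of character-based splitting as described in the table above.
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
chunks = splitter.create_documents(["LangChain offers many different types of text splitters. " * 10])
print(len(chunks), chunks[0].page_content[:60])
```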
@@ -0,0 +1,145 @@ (new notebook; content shown below, cell metadata omitted)

# Semantic Chunking

Splits the text based on semantic similarity.

Taken from Greg Kamradt's wonderful notebook:
https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/5_Levels_Of_Text_Splitting.ipynb

All credit to him.

At a high level, this splits into sentences, then groups into groups of 3 sentences, and then merges ones that are similar in the embedding space.

## Install Dependencies

!pip install --quiet langchain_experimental langchain_openai

## Load Example Data

# This is a long document we can split up.
with open("../../state_of_the_union.txt") as f:
    state_of_the_union = f.read()

## Create Text Splitter

from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai.embeddings import OpenAIEmbeddings

text_splitter = SemanticChunker(OpenAIEmbeddings())

## Split Text

docs = text_splitter.create_documents([state_of_the_union])
print(docs[0].page_content)

Output: "Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. Last year COVID-19 kept us apart. This year we are finally together again. Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. With a duty to one another to the American people to the Constitution. And with an unwavering resolve that freedom will always triumph over tyranny. Six days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. He met the Ukrainian people. From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight. Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. Throughout our history we’ve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. They keep moving."
@@ -1,3 +1,2 @@
label: 'Q&A over structured data'
collapsed: false
position: 0.1
@@ -399,24 +399,6 @@
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"**Improvements**\n",
|
||||
"\n",
|
||||
"The performance of the `SQLDatabaseChain` can be enhanced in several ways:\n",
|
||||
"\n",
|
||||
"- [Adding sample rows](#adding-sample-rows)\n",
|
||||
"- [Specifying custom table information](/docs/integrations/tools/sqlite#custom-table-info)\n",
|
||||
"- [Using Query Checker](/docs/integrations/tools/sqlite#use-query-checker) self-correct invalid SQL using parameter `use_query_checker=True`\n",
|
||||
"- [Customizing the LLM Prompt](/docs/integrations/tools/sqlite#customize-prompt) include specific instructions or relevant information, using parameter `prompt=CUSTOM_PROMPT`\n",
|
||||
"- [Get intermediate steps](/docs/integrations/tools/sqlite#return-intermediate-steps) access the SQL statement as well as the final result using parameter `return_intermediate_steps=True`\n",
|
||||
"- [Limit the number of rows](/docs/integrations/tools/sqlite#choosing-how-to-limit-the-number-of-rows-returned) a query will return using parameter `top_k=5`\n",
|
||||
"\n",
|
||||
"You might find [SQLDatabaseSequentialChain](/docs/integrations/tools/sqlite#sqldatabasesequentialchain)\n",
|
||||
"useful for cases in which the number of tables in the database is large.\n",
|
||||
"\n",
|
||||
"This `Sequential Chain` handles the process of:\n",
|
||||
"\n",
|
||||
"1. Determining which tables to use based on the user question\n",
|
||||
"2. Calling the normal SQL database chain using only relevant tables\n",
|
||||
"\n",
|
||||
"**Adding Sample Rows**\n",
|
||||
"\n",
|
||||
@@ -1269,7 +1251,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.1"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -11,7 +11,7 @@
|
||||
"\n",
|
||||
"LangChain has [integrations](https://integrations.langchain.com/) with many open-source LLMs that can be run locally.\n",
|
||||
"\n",
|
||||
"See [here](docs/guides/local_llms) for setup instructions for these LLMs. \n",
|
||||
"See [here](/docs/guides/local_llms) for setup instructions for these LLMs. \n",
|
||||
"\n",
|
||||
"For example, here we show how to run `GPT4All` or `LLaMA2` locally (e.g., on your laptop) using local embeddings and a local LLM.\n",
|
||||
"\n",
|
||||
@@ -141,11 +141,11 @@
|
||||
"\n",
|
||||
"Note: new versions of `llama-cpp-python` use GGUF model files (see [here](https://github.com/abetlen/llama-cpp-python/pull/633)).\n",
|
||||
"\n",
|
||||
"If you have an existing GGML model, see [here](docs/integrations/llms/llamacpp) for instructions for conversion for GGUF. \n",
|
||||
"If you have an existing GGML model, see [here](/docs/integrations/llms/llamacpp) for instructions for conversion for GGUF. \n",
|
||||
" \n",
|
||||
"And / or, you can download a GGUF converted model (e.g., [here](https://huggingface.co/TheBloke)).\n",
|
||||
"\n",
|
||||
"Finally, as noted in detail [here](docs/guides/local_llms) install `llama-cpp-python`"
|
||||
"Finally, as noted in detail [here](/docs/guides/local_llms) install `llama-cpp-python`"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -201,7 +201,7 @@
|
||||
"id": "fcf81052",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Setting model parameters as noted in the [llama.cpp docs](https://python.langchain.com/docs/integrations/llms/llamacpp)."
|
||||
"Setting model parameters as noted in the [llama.cpp docs](/docs/integrations/llms/llamacpp)."
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -230,7 +230,7 @@
|
||||
"id": "3831b16a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Note that these indicate that [Metal was enabled properly](https://python.langchain.com/docs/integrations/llms/llamacpp):\n",
|
||||
"Note that these indicate that [Metal was enabled properly](/docs/integrations/llms/llamacpp):\n",
|
||||
"\n",
|
||||
"```\n",
|
||||
"ggml_metal_init: allocating\n",
|
||||
@@ -304,7 +304,7 @@
|
||||
"\n",
|
||||
"Similarly, we can use `GPT4All`.\n",
|
||||
"\n",
|
||||
"[Download the GPT4All model binary](https://python.langchain.com/docs/integrations/llms/gpt4all).\n",
|
||||
"[Download the GPT4All model binary](/docs/integrations/llms/gpt4all).\n",
|
||||
"\n",
|
||||
"The Model Explorer on the [GPT4All](https://gpt4all.io/index.html) is a great way to choose and download a model.\n",
|
||||
"\n",
|
||||
|
||||
@@ -27,7 +27,7 @@
|
||||
"\n",
|
||||
"**Step 2: Add that parameter as a configurable field for the chain**\n",
|
||||
"\n",
|
||||
"This will let you easily call the chain and configure any relevant flags at runtime. See [this documentation](docs/expression_language/how_to/configure) for more information on configuration.\n",
|
||||
"This will let you easily call the chain and configure any relevant flags at runtime. See [this documentation](/docs/expression_language/how_to/configure) for more information on configuration.\n",
|
||||
"\n",
|
||||
"**Step 3: Call the chain with that configurable field**\n",
|
||||
"\n",
|
||||
@@ -298,14 +298,6 @@
|
||||
" config={\"configurable\": {\"search_kwargs\": {\"namespace\": \"ankush\"}}},\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "e3aa0b9e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
|
||||
@@ -55,7 +55,7 @@
|
||||
"\n",
|
||||
"#### Retrieval and generation\n",
|
||||
"4. **Retrieve**: Given a user input, relevant splits are retrieved from storage using a [Retriever](/docs/modules/data_connection/retrievers/).\n",
|
||||
"5. **Generate**: A [ChatModel](/docs/modules/model_io/chat) / [LLM](/docs/modules/model_io/llms/) produces an answer using a prompt that includes the question and the retrieved data"
|
||||
"5. **Generate**: A [ChatModel](/docs/modules/model_io/chat/) / [LLM](/docs/modules/model_io/llms/) produces an answer using a prompt that includes the question and the retrieved data"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -449,7 +449,7 @@
|
||||
"`TextSplitter`: Object that splits a list of `Document`s into smaller chunks. Subclass of `DocumentTransformer`s.\n",
|
||||
"- Explore `Context-aware splitters`, which keep the location (\"context\") of each split in the original `Document`:\n",
|
||||
" - [Markdown files](/docs/modules/data_connection/document_transformers/markdown_header_metadata)\n",
|
||||
" - [Code (py or js)](docs/integrations/document_loaders/source_code)\n",
|
||||
" - [Code (py or js)](/docs/integrations/document_loaders/source_code)\n",
|
||||
" - [Scientific papers](/docs/integrations/document_loaders/grobid)\n",
|
||||
"- [Interface](https://api.python.langchain.com/en/latest/text_splitter/langchain.text_splitter.TextSplitter.html): API reference for the base interface.\n",
|
||||
"\n",
|
||||
@@ -865,7 +865,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.10.1"
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -1422,7 +1422,7 @@
  },
  {
    "source": "/docs/integrations/tools/sqlite",
    "destination": "/docs/use_cases/qa_structured/sqlite"
    "destination": "/docs/use_cases/qa_structured/sql"
  },
  {
    "source": "/en/latest/modules/callbacks/filecallbackhandler.html",
@@ -18,7 +18,9 @@ logger = logging.getLogger(__name__)


@deprecated(
    since="0.1.0", removal="0.2.0", alternative="langchain_openai.AzureChatOpenAI"
    since="0.0.10",
    removal="0.2.0",
    alternative_import="langchain_openai.AzureChatOpenAI",
)
class AzureChatOpenAI(ChatOpenAI):
    """`Azure OpenAI` Chat Completion API.
@@ -1,5 +1,3 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from typing import Any, AsyncIterator, Dict, Iterator, List, Mapping, Optional, cast
|
||||
|
||||
@@ -244,7 +242,14 @@ class QianfanChatEndpoint(BaseChatModel):
|
||||
"""
|
||||
if self.streaming:
|
||||
completion = ""
|
||||
token_usage = {}
|
||||
chat_generation_info: Dict = {}
|
||||
for chunk in self._stream(messages, stop, run_manager, **kwargs):
|
||||
chat_generation_info = (
|
||||
chunk.generation_info
|
||||
if chunk.generation_info is not None
|
||||
else chat_generation_info
|
||||
)
|
||||
completion += chunk.text
|
||||
lc_msg = AIMessage(content=completion, additional_kwargs={})
|
||||
gen = ChatGeneration(
|
||||
@@ -253,7 +258,10 @@ class QianfanChatEndpoint(BaseChatModel):
|
||||
)
|
||||
return ChatResult(
|
||||
generations=[gen],
|
||||
llm_output={"token_usage": {}, "model_name": self.model},
|
||||
llm_output={
|
||||
"token_usage": chat_generation_info.get("usage", {}),
|
||||
"model_name": self.model,
|
||||
},
|
||||
)
|
||||
params = self._convert_prompt_msg_params(messages, **kwargs)
|
||||
response_payload = self.client.do(**params)
|
||||
@@ -279,7 +287,13 @@ class QianfanChatEndpoint(BaseChatModel):
|
||||
if self.streaming:
|
||||
completion = ""
|
||||
token_usage = {}
|
||||
chat_generation_info: Dict = {}
|
||||
async for chunk in self._astream(messages, stop, run_manager, **kwargs):
|
||||
chat_generation_info = (
|
||||
chunk.generation_info
|
||||
if chunk.generation_info is not None
|
||||
else chat_generation_info
|
||||
)
|
||||
completion += chunk.text
|
||||
|
||||
lc_msg = AIMessage(content=completion, additional_kwargs={})
|
||||
@@ -289,7 +303,10 @@ class QianfanChatEndpoint(BaseChatModel):
|
||||
)
|
||||
return ChatResult(
|
||||
generations=[gen],
|
||||
llm_output={"token_usage": {}, "model_name": self.model},
|
||||
llm_output={
|
||||
"token_usage": chat_generation_info.get("usage", {}),
|
||||
"model_name": self.model,
|
||||
},
|
||||
)
|
||||
params = self._convert_prompt_msg_params(messages, **kwargs)
|
||||
response_payload = await self.client.ado(**params)
|
||||
@@ -315,16 +332,19 @@ class QianfanChatEndpoint(BaseChatModel):
|
||||
**kwargs: Any,
|
||||
) -> Iterator[ChatGenerationChunk]:
|
||||
params = self._convert_prompt_msg_params(messages, **kwargs)
|
||||
params["stream"] = True
|
||||
for res in self.client.do(**params):
|
||||
if res:
|
||||
msg = _convert_dict_to_message(res)
|
||||
additional_kwargs = msg.additional_kwargs.get("function_call", {})
|
||||
chunk = ChatGenerationChunk(
|
||||
text=res["result"],
|
||||
message=AIMessageChunk(
|
||||
content=msg.content,
|
||||
role="assistant",
|
||||
additional_kwargs=msg.additional_kwargs,
|
||||
additional_kwargs=additional_kwargs,
|
||||
),
|
||||
generation_info=msg.additional_kwargs,
|
||||
)
|
||||
yield chunk
|
||||
if run_manager:
|
||||
@@ -338,16 +358,19 @@ class QianfanChatEndpoint(BaseChatModel):
|
||||
**kwargs: Any,
|
||||
) -> AsyncIterator[ChatGenerationChunk]:
|
||||
params = self._convert_prompt_msg_params(messages, **kwargs)
|
||||
params["stream"] = True
|
||||
async for res in await self.client.ado(**params):
|
||||
if res:
|
||||
msg = _convert_dict_to_message(res)
|
||||
additional_kwargs = msg.additional_kwargs.get("function_call", {})
|
||||
chunk = ChatGenerationChunk(
|
||||
text=res["result"],
|
||||
message=AIMessageChunk(
|
||||
content=msg.content,
|
||||
role="assistant",
|
||||
additional_kwargs=msg.additional_kwargs,
|
||||
additional_kwargs=additional_kwargs,
|
||||
),
|
||||
generation_info=msg.additional_kwargs,
|
||||
)
|
||||
yield chunk
|
||||
if run_manager:
|
||||
|
||||
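The `QianfanChatEndpoint` hunks above keep the latest non-empty `generation_info` seen while streaming and surface its `usage` field as `token_usage` (previously an empty dict). A self-contained sketch of that accumulation pattern, using stand-in objects rather than the real client:

```python
# Simplified sketch of the pattern in the change above; Chunk is a stand-in
# for ChatGenerationChunk and the list replaces the real streaming call.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Chunk:
    text: str
    generation_info: Optional[dict] = None

chunks = [Chunk("Hel"), Chunk("lo", {"usage": {"total_tokens": 7}})]

completion, chat_generation_info = "", {}
for chunk in chunks:
    if chunk.generation_info is not None:
        chat_generation_info = chunk.generation_info
    completion += chunk.text

llm_output = {"token_usage": chat_generation_info.get("usage", {})}
print(completion, llm_output)  # Hello {'token_usage': {'total_tokens': 7}}
```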
@@ -144,7 +144,9 @@ def _convert_delta_to_message_chunk(
    return default_class(content=content)


@deprecated(since="0.1.0", removal="0.2.0", alternative="langchain_openai.ChatOpenAI")
@deprecated(
    since="0.0.10", removal="0.2.0", alternative_import="langchain_openai.ChatOpenAI"
)
class ChatOpenAI(BaseChatModel):
    """`OpenAI` Chat large language models API.
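These deprecation markers all point in the same direction: the `langchain_community` classes now direct users to the partner packages via `alternative_import`. A minimal sketch of the migration they suggest (assuming `langchain-openai` is installed):

```python
# Before (deprecated in langchain_community, slated for removal in 0.2.0):
# from langchain_community.chat_models import ChatOpenAI
# After:
from langchain_openai import ChatOpenAI

chat = ChatOpenAI()  # reads OPENAI_API_KEY from the environment
print(chat.invoke("ping").content)
```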
@@ -9,6 +9,7 @@ from typing import TYPE_CHECKING, Any, Dict, Iterator, List, Optional, Union, ca
from urllib.parse import urlparse

import requests
from langchain_core._api.deprecation import deprecated
from langchain_core.callbacks import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
@@ -203,6 +204,11 @@ def _get_question(messages: List[BaseMessage]) -> HumanMessage:
    return question


@deprecated(
    since="0.0.12",
    removal="0.2.0",
    alternative_import="langchain_google_vertexai.ChatVertexAI",
)
class ChatVertexAI(_VertexAICommon, BaseChatModel):
    """`Vertex AI` Chat large language models API."""
@@ -14,7 +14,9 @@ from langchain_community.utils.openai import is_openai_v1


@deprecated(
    since="0.1.0", removal="0.2.0", alternative="langchain_openai.AzureOpenAIEmbeddings"
    since="0.1.0",
    removal="0.2.0",
    alternative_import="langchain_openai.AzureOpenAIEmbeddings",
)
class AzureOpenAIEmbeddings(OpenAIEmbeddings):
    """`Azure OpenAI` Embeddings API."""
@@ -23,7 +23,8 @@ class ClarifaiEmbeddings(BaseModel, Embeddings):
                app_id=APP_ID,
                model_id=MODEL_ID)
            (or)
            clarifai_llm = Clarifai(model_url=EXAMPLE_URL)
            Example_URL = "https://clarifai.com/clarifai/main/models/BAAI-bge-base-en-v15"
            clarifai = ClarifaiEmbeddings(model_url=EXAMPLE_URL)
    """

    model_url: Optional[str] = None
@@ -141,7 +141,7 @@ async def async_embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) ->
@deprecated(
    since="0.1.0",
    removal="0.2.0",
    alternative="langchain_openai.OpenAIEmbeddings",
    alternative_import="langchain_openai.OpenAIEmbeddings",
)
class OpenAIEmbeddings(BaseModel, Embeddings):
    """OpenAI embedding models.
@@ -5,6 +5,7 @@ import threading
from concurrent.futures import ThreadPoolExecutor, wait
from typing import Any, Dict, List, Literal, Optional, Tuple

from langchain_core._api.deprecation import deprecated
from langchain_core.embeddings import Embeddings
from langchain_core.language_models.llms import create_base_retry_decorator
from langchain_core.pydantic_v1 import root_validator
@@ -19,6 +20,11 @@ _MAX_BATCH_SIZE = 250
_MIN_BATCH_SIZE = 5


@deprecated(
    since="0.0.12",
    removal="0.2.0",
    alternative_import="langchain_google_vertexai.VertexAIEmbeddings",
)
class VertexAIEmbeddings(_VertexAICommon, Embeddings):
    """Google Cloud VertexAI embedding models."""
@@ -59,7 +59,7 @@ def _strip_erroneous_leading_spaces(text: str) -> str:
    return text


@deprecated("0.0.351", alternative="langchain_google_genai.GoogleGenerativeAI")
@deprecated("0.0.351", alternative_import="langchain_google_genai.GoogleGenerativeAI")
class GooglePalm(BaseLLM, BaseModel):
    """
    DEPRECATED: Use `langchain_google_genai.GoogleGenerativeAI` instead.
@@ -725,7 +725,9 @@ class BaseOpenAI(BaseLLM):
        return self.max_context_size - num_tokens


@deprecated(since="0.1.0", removal="0.2.0", alternative="langchain_openai.OpenAI")
@deprecated(
    since="0.0.10", removal="0.2.0", alternative_import="langchain_openai.OpenAI"
)
class OpenAI(BaseOpenAI):
    """OpenAI large language models.
@@ -752,7 +754,9 @@ class OpenAI(BaseOpenAI):
        return {**{"model": self.model_name}, **super()._invocation_params}


@deprecated(since="0.1.0", removal="0.2.0", alternative="langchain_openai.AzureOpenAI")
@deprecated(
    since="0.0.10", removal="0.2.0", alternative_import="langchain_openai.AzureOpenAI"
)
class AzureOpenAI(BaseOpenAI):
    """Azure-specific OpenAI large language models.
@@ -956,7 +960,11 @@ class AzureOpenAI(BaseOpenAI):
    }


@deprecated(since="0.1.0", removal="0.2.0", alternative="langchain_openai.ChatOpenAI")
@deprecated(
    since="0.0.1",
    removal="0.2.0",
    alternative_import="langchain_openai.ChatOpenAI",
)
class OpenAIChat(BaseLLM):
    """OpenAI Chat large language models.
@@ -3,6 +3,7 @@ import logging
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
from aiohttp import ClientSession
|
||||
from langchain_core._api.deprecation import deprecated
|
||||
from langchain_core.callbacks import (
|
||||
AsyncCallbackManagerForLLMRun,
|
||||
CallbackManagerForLLMRun,
|
||||
@@ -16,6 +17,9 @@ from langchain_community.utilities.requests import Requests
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@deprecated(
|
||||
since="0.0.12", removal="0.2", alternative_import="langchain_together.Together"
|
||||
)
|
||||
class Together(LLM):
|
||||
"""LLM models from `Together`.
|
||||
|
||||
|
||||
@@ -3,6 +3,7 @@ from __future__ import annotations
|
||||
from concurrent.futures import Executor, ThreadPoolExecutor
|
||||
from typing import TYPE_CHECKING, Any, ClassVar, Dict, Iterator, List, Optional, Union
|
||||
|
||||
from langchain_core._api.deprecation import deprecated
|
||||
from langchain_core.callbacks.manager import (
|
||||
AsyncCallbackManagerForLLMRun,
|
||||
CallbackManagerForLLMRun,
|
||||
@@ -200,6 +201,11 @@ class _VertexAICommon(_VertexAIBase):
|
||||
return params
|
||||
|
||||
|
||||
@deprecated(
|
||||
since="0.0.12",
|
||||
removal="0.2.0",
|
||||
alternative_import="langchain_google_vertexai.VertexAI",
|
||||
)
|
||||
class VertexAI(_VertexAICommon, BaseLLM):
|
||||
"""Google Vertex AI large language models."""
|
||||
|
||||
@@ -385,6 +391,11 @@ class VertexAI(_VertexAICommon, BaseLLM):
|
||||
)
|
||||
|
||||
|
||||
@deprecated(
|
||||
since="0.0.12",
|
||||
removal="0.2.0",
|
||||
alternative_import="langchain_google_vertexai.VertexAIModelGarden",
|
||||
)
|
||||
class VertexAIModelGarden(_VertexAIBase, BaseLLM):
|
||||
"""Large language models served from Vertex AI Model Garden."""
|
||||
|
||||
|
||||

@@ -18,6 +18,7 @@ class MilvusRetriever(BaseRetriever):
embedding_function: Embeddings
collection_name: str = "LangChainCollection"
collection_properties: Optional[Dict[str, Any]] = None
connection_args: Optional[Dict[str, Any]] = None
consistency_level: str = "Session"
search_params: Optional[dict] = None

@@ -31,6 +32,7 @@ class MilvusRetriever(BaseRetriever):
values["store"] = Milvus(
values["embedding_function"],
values["collection_name"],
values["collection_properties"],
values["connection_args"],
values["consistency_level"],
)

@@ -24,10 +24,12 @@ class Clarifai(VectorStore):
.. code-block:: python
from langchain_community.vectorstores import Clarifai
from langchain_community.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Clarifai("langchain_store", embeddings.embed_query)
clarifai_vector_db = Clarifai(
user_id=USER_ID,
app_id=APP_ID,
number_of_docs=NUMBER_OF_DOCS,
)
"""
def __init__(

@@ -42,6 +42,10 @@ class Milvus(VectorStore):
"LangChainCollection".
collection_description (str): The description of the collection. Defaults to
"".
collection_properties (Optional[dict[str, any]]): The collection properties.
Defaults to None.
If set, will override collection existing properties.
For example: {"collection.ttl.seconds": 60}.
connection_args (Optional[dict[str, any]]): The connection args used for
this class comes in the form of a dict.
consistency_level (str): The consistency level to use for a collection.

@@ -109,6 +113,7 @@ class Milvus(VectorStore):
embedding_function: Embeddings,
collection_name: str = "LangChainCollection",
collection_description: str = "",
collection_properties: Optional[dict[str, Any]] = None,
connection_args: Optional[dict[str, Any]] = None,
consistency_level: str = "Session",
index_params: Optional[dict] = None,

@@ -149,6 +154,7 @@ class Milvus(VectorStore):
self.embedding_func = embedding_function
self.collection_name = collection_name
self.collection_description = collection_description
self.collection_properties = collection_properties
self.index_params = index_params
self.search_params = search_params
self.consistency_level = consistency_level

@@ -177,6 +183,8 @@ class Milvus(VectorStore):
self.collection_name,
using=self.alias,
)
if self.collection_properties is not None:
self.col.set_properties(self.collection_properties)
# If need to drop old, drop it
if drop_old and isinstance(self.col, Collection):
self.col.drop()

@@ -332,6 +340,9 @@ class Milvus(VectorStore):
consistency_level=self.consistency_level,
using=self.alias,
)
# Set the collection properties if they exist
if self.collection_properties is not None:
self.col.set_properties(self.collection_properties)
except MilvusException as e:
logger.error(
"Failed to create collection: %s error: %s", self.collection_name, e
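
The hunks above add a `collection_properties` option to both `MilvusRetriever` and the `Milvus` vector store and apply it with `col.set_properties(...)` once the collection exists or is created. A minimal usage sketch, assuming a reachable Milvus server; the host, port, and TTL value are illustrative, and `FakeEmbeddings` only keeps the example self-contained:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import Milvus

# Illustrative only: any Embeddings implementation and any reachable
# Milvus deployment will do.
vector_db = Milvus(
    embedding_function=FakeEmbeddings(size=128),
    collection_name="LangChainCollection",
    # New argument from this change: forwarded to col.set_properties(),
    # e.g. expire entities after one hour.
    collection_properties={"collection.ttl.seconds": 3600},
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
```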
@@ -4,16 +4,7 @@ import logging
|
||||
import os
|
||||
import uuid
|
||||
import warnings
|
||||
from typing import (
|
||||
TYPE_CHECKING,
|
||||
Any,
|
||||
Callable,
|
||||
Iterable,
|
||||
List,
|
||||
Optional,
|
||||
Tuple,
|
||||
Union,
|
||||
)
|
||||
from typing import TYPE_CHECKING, Any, Callable, Iterable, List, Optional, Tuple, Union
|
||||
|
||||
import numpy as np
|
||||
from langchain_core.documents import Document
|
||||
@@ -33,6 +24,26 @@ if TYPE_CHECKING:
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _import_pinecone() -> Any:
|
||||
try:
|
||||
import pinecone
|
||||
except ImportError as e:
|
||||
raise ImportError(
|
||||
"Could not import pinecone python package. "
|
||||
"Please install it with `pip install pinecone-client`."
|
||||
) from e
|
||||
return pinecone
|
||||
|
||||
|
||||
def _is_pinecone_v3() -> bool:
|
||||
pinecone = _import_pinecone()
|
||||
pinecone_client_version = pinecone.__version__
|
||||
if version.parse(pinecone_client_version) >= version.parse("3.0.0.dev"):
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
||||
|
||||
class Pinecone(VectorStore):
|
||||
"""`Pinecone` vector store.
|
||||
|
||||
@@ -62,13 +73,7 @@ class Pinecone(VectorStore):
|
||||
distance_strategy: Optional[DistanceStrategy] = DistanceStrategy.COSINE,
|
||||
):
|
||||
"""Initialize with Pinecone client."""
|
||||
try:
|
||||
import pinecone
|
||||
except ImportError:
|
||||
raise ImportError(
|
||||
"Could not import pinecone python package. "
|
||||
"Please install it with `pip install pinecone-client`."
|
||||
)
|
||||
pinecone = _import_pinecone()
|
||||
if not isinstance(embedding, Embeddings):
|
||||
warnings.warn(
|
||||
"Passing in `embedding` as a Callable is deprecated. Please pass in an"
|
||||
@@ -361,17 +366,9 @@ class Pinecone(VectorStore):
|
||||
Returns:
|
||||
Pinecone Index instance."""
|
||||
|
||||
try:
|
||||
import pinecone
|
||||
except ImportError:
|
||||
raise ValueError(
|
||||
"Could not import pinecone python package. "
|
||||
"Please install it with `pip install pinecone-client`."
|
||||
)
|
||||
pinecone = _import_pinecone()
|
||||
|
||||
pinecone_client_version = pinecone.__version__
|
||||
|
||||
if version.parse(pinecone_client_version) >= version.parse("3.0.0.dev"):
|
||||
if _is_pinecone_v3():
|
||||
pinecone_instance = pinecone.Pinecone(
|
||||
api_key=os.environ.get("PINECONE_API_KEY"), pool_threads=pool_threads
|
||||
)
|
||||
@@ -383,7 +380,7 @@ class Pinecone(VectorStore):
|
||||
if index_name in index_names:
|
||||
index = (
|
||||
pinecone_instance.Index(index_name)
|
||||
if version.parse(pinecone_client_version) >= version.parse("3.0.0")
|
||||
if not _is_pinecone_v3()
|
||||
else pinecone.Index(index_name, pool_threads=pool_threads)
|
||||
)
|
||||
elif len(index_names) == 0:
|
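
The Pinecone hunks above replace two copies of the inline `import pinecone` and version-parsing logic with the shared `_import_pinecone` and `_is_pinecone_v3` helpers, so the v2/v3 client branch is decided in one place. A rough sketch of how calling code can branch on the helper, assuming the `pinecone-client` package is installed; the module path and the index name are assumptions made for illustration:

```python
import os

# The helpers are the ones added in the diff above; importing them from
# langchain_community.vectorstores.pinecone is an assumption about where
# they live after this change.
from langchain_community.vectorstores.pinecone import _import_pinecone, _is_pinecone_v3

pinecone = _import_pinecone()  # raises a helpful ImportError if the package is missing
if _is_pinecone_v3():
    # v3 client: instantiate a Pinecone object explicitly.
    client = pinecone.Pinecone(api_key=os.environ["PINECONE_API_KEY"])
    index = client.Index("my-index")  # illustrative index name
else:
    # v2 client: module-level Index constructor.
    index = pinecone.Index("my-index")
```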
libs/community/poetry.lock (4 changes, generated)
@@ -3881,7 +3881,7 @@ files = [

[[package]]
name = "langchain-core"
version = "0.1.8"
version = "0.1.9"
description = "Building applications with LLMs through composability"
optional = false
python-versions = ">=3.8.1,<4.0"

@@ -9144,4 +9144,4 @@ extended-testing = ["aiosqlite", "aleph-alpha-client", "anthropic", "arxiv", "as
[metadata]
lock-version = "2.0"
python-versions = ">=3.8.1,<4.0"
content-hash = "a3765004062afc80420d49d782e3e023aec1707bf35ae2f808d88a1cac53b694"
content-hash = "edadd024e8b2b4a817a90336013a1d92be102d03d4c41fbf5ac137f16d97fdfb"

@@ -9,7 +9,7 @@ repository = "https://github.com/langchain-ai/langchain"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
langchain-core = ">=0.1.8,<0.2"
langchain-core = ">=0.1.9,<0.2"
SQLAlchemy = ">=1.4,<3"
requests = "^2"
PyYAML = ">=5.3"
@@ -1,53 +0,0 @@
|
||||
from typing import cast
|
||||
|
||||
from langchain_core.pydantic_v1 import SecretStr
|
||||
from pytest import CaptureFixture, MonkeyPatch
|
||||
|
||||
from langchain_community.chat_models.baidu_qianfan_endpoint import (
|
||||
QianfanChatEndpoint,
|
||||
)
|
||||
|
||||
|
||||
def test_qianfan_key_masked_when_passed_from_env(
|
||||
monkeypatch: MonkeyPatch, capsys: CaptureFixture
|
||||
) -> None:
|
||||
"""Test initialization with an API key provided via an env variable"""
|
||||
monkeypatch.setenv("QIANFAN_AK", "test-api-key")
|
||||
monkeypatch.setenv("QIANFAN_SK", "test-secret-key")
|
||||
|
||||
chat = QianfanChatEndpoint()
|
||||
print(chat.qianfan_ak, end="")
|
||||
captured = capsys.readouterr()
|
||||
assert captured.out == "**********"
|
||||
|
||||
print(chat.qianfan_sk, end="")
|
||||
captured = capsys.readouterr()
|
||||
assert captured.out == "**********"
|
||||
|
||||
|
||||
def test_qianfan_key_masked_when_passed_via_constructor(
|
||||
capsys: CaptureFixture,
|
||||
) -> None:
|
||||
"""Test initialization with an API key provided via the initializer"""
|
||||
chat = QianfanChatEndpoint(
|
||||
qianfan_ak="test-api-key",
|
||||
qianfan_sk="test-secret-key",
|
||||
)
|
||||
print(chat.qianfan_ak, end="")
|
||||
captured = capsys.readouterr()
|
||||
assert captured.out == "**********"
|
||||
|
||||
print(chat.qianfan_sk, end="")
|
||||
captured = capsys.readouterr()
|
||||
|
||||
assert captured.out == "**********"
|
||||
|
||||
|
||||
def test_uses_actual_secret_value_from_secret_str() -> None:
|
||||
"""Test that actual secret is retrieved using `.get_secret_value()`."""
|
||||
chat = QianfanChatEndpoint(
|
||||
qianfan_ak="test-api-key",
|
||||
qianfan_sk="test-secret-key",
|
||||
)
|
||||
assert cast(SecretStr, chat.qianfan_ak).get_secret_value() == "test-api-key"
|
||||
assert cast(SecretStr, chat.qianfan_sk).get_secret_value() == "test-secret-key"
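
The deleted unit-test file above checked that the Qianfan credentials are masked; the same assertions reappear in the integration-test module in the next hunk. The behaviour under test is pydantic's `SecretStr`, which prints a masked value and only reveals the secret through an explicit accessor. A minimal sketch, independent of Qianfan:

```python
from langchain_core.pydantic_v1 import SecretStr

secret = SecretStr("test-api-key")
print(secret)                     # **********  (masked on print/repr)
print(secret.get_secret_value())  # test-api-key (explicit opt-in to the raw value)
```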
@@ -1,18 +1,24 @@
|
||||
"""Test Baidu Qianfan Chat Endpoint."""
|
||||
|
||||
from typing import Any
|
||||
from typing import Any, cast
|
||||
|
||||
import pytest
|
||||
from langchain_core.callbacks import CallbackManager
|
||||
from langchain_core.messages import (
|
||||
AIMessage,
|
||||
BaseMessage,
|
||||
BaseMessageChunk,
|
||||
FunctionMessage,
|
||||
HumanMessage,
|
||||
)
|
||||
from langchain_core.outputs import ChatGeneration, LLMResult
|
||||
from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
|
||||
from langchain_core.pydantic_v1 import SecretStr
|
||||
from pytest import CaptureFixture, MonkeyPatch
|
||||
|
||||
from langchain_community.chat_models.baidu_qianfan_endpoint import QianfanChatEndpoint
|
||||
from langchain_community.chat_models.baidu_qianfan_endpoint import (
|
||||
QianfanChatEndpoint,
|
||||
)
|
||||
from tests.unit_tests.callbacks.fake_callback_handler import FakeCallbackHandler
|
||||
|
||||
_FUNCTIONS: Any = [
|
||||
@@ -139,6 +145,25 @@ def test_multiple_history() -> None:
|
||||
assert isinstance(response.content, str)
|
||||
|
||||
|
||||
def test_chat_generate() -> None:
|
||||
"""Tests chat generate works."""
|
||||
chat = QianfanChatEndpoint()
|
||||
response = chat.generate(
|
||||
[
|
||||
[
|
||||
HumanMessage(content="Hello."),
|
||||
AIMessage(content="Hello!"),
|
||||
HumanMessage(content="How are you doing?"),
|
||||
]
|
||||
]
|
||||
)
|
||||
assert isinstance(response, LLMResult)
|
||||
for generations in response.generations:
|
||||
for generation in generations:
|
||||
assert isinstance(generation, ChatGeneration)
|
||||
assert isinstance(generation.text, str)
|
||||
|
||||
|
||||
def test_stream() -> None:
|
||||
"""Test that stream works."""
|
||||
chat = QianfanChatEndpoint(streaming=True)
|
||||
@@ -156,6 +181,57 @@ def test_stream() -> None:
|
||||
assert callback_handler.llm_streams > 0
|
||||
assert isinstance(response.content, str)
|
||||
|
||||
res = chat.stream(
|
||||
[
|
||||
HumanMessage(content="Hello."),
|
||||
AIMessage(content="Hello!"),
|
||||
HumanMessage(content="Who are you?"),
|
||||
]
|
||||
)
|
||||
|
||||
assert len(list(res)) >= 1
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_async_invoke() -> None:
|
||||
chat = QianfanChatEndpoint()
|
||||
res = await chat.ainvoke([HumanMessage(content="Hello")])
|
||||
assert isinstance(res, BaseMessage)
|
||||
assert res.content != ""
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_async_generate() -> None:
|
||||
"""Tests chat agenerate works."""
|
||||
chat = QianfanChatEndpoint()
|
||||
response = await chat.agenerate(
|
||||
[
|
||||
[
|
||||
HumanMessage(content="Hello."),
|
||||
AIMessage(content="Hello!"),
|
||||
HumanMessage(content="How are you doing?"),
|
||||
]
|
||||
]
|
||||
)
|
||||
assert isinstance(response, LLMResult)
|
||||
for generations in response.generations:
|
||||
for generation in generations:
|
||||
assert isinstance(generation, ChatGeneration)
|
||||
assert isinstance(generation.text, str)
|
||||
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_async_stream() -> None:
|
||||
chat = QianfanChatEndpoint(streaming=True)
|
||||
async for token in chat.astream(
|
||||
[
|
||||
HumanMessage(content="Hello."),
|
||||
AIMessage(content="Hello!"),
|
||||
HumanMessage(content="Who are you?"),
|
||||
]
|
||||
):
|
||||
assert isinstance(token, BaseMessageChunk)
|
||||
|
||||
|
||||
def test_multiple_messages() -> None:
|
||||
"""Tests multiple messages works."""
|
||||
@@ -232,3 +308,48 @@ def test_rate_limit() -> None:
|
||||
for res in responses:
|
||||
assert isinstance(res, BaseMessage)
|
||||
assert isinstance(res.content, str)
|
||||
|
||||
|
||||
def test_qianfan_key_masked_when_passed_from_env(
|
||||
monkeypatch: MonkeyPatch, capsys: CaptureFixture
|
||||
) -> None:
|
||||
"""Test initialization with an API key provided via an env variable"""
|
||||
monkeypatch.setenv("QIANFAN_AK", "test-api-key")
|
||||
monkeypatch.setenv("QIANFAN_SK", "test-secret-key")
|
||||
|
||||
chat = QianfanChatEndpoint()
|
||||
print(chat.qianfan_ak, end="")
|
||||
captured = capsys.readouterr()
|
||||
assert captured.out == "**********"
|
||||
|
||||
print(chat.qianfan_sk, end="")
|
||||
captured = capsys.readouterr()
|
||||
assert captured.out == "**********"
|
||||
|
||||
|
||||
def test_qianfan_key_masked_when_passed_via_constructor(
|
||||
capsys: CaptureFixture,
|
||||
) -> None:
|
||||
"""Test initialization with an API key provided via the initializer"""
|
||||
chat = QianfanChatEndpoint(
|
||||
qianfan_ak="test-api-key",
|
||||
qianfan_sk="test-secret-key",
|
||||
)
|
||||
print(chat.qianfan_ak, end="")
|
||||
captured = capsys.readouterr()
|
||||
assert captured.out == "**********"
|
||||
|
||||
print(chat.qianfan_sk, end="")
|
||||
captured = capsys.readouterr()
|
||||
|
||||
assert captured.out == "**********"
|
||||
|
||||
|
||||
def test_uses_actual_secret_value_from_secret_str() -> None:
|
||||
"""Test that actual secret is retrieved using `.get_secret_value()`."""
|
||||
chat = QianfanChatEndpoint(
|
||||
qianfan_ak="test-api-key",
|
||||
qianfan_sk="test-secret-key",
|
||||
)
|
||||
assert cast(SecretStr, chat.qianfan_ak).get_secret_value() == "test-api-key"
|
||||
assert cast(SecretStr, chat.qianfan_sk).get_secret_value() == "test-secret-key"
|
||||
|
||||
@@ -39,6 +39,7 @@ def deprecated(
message: str = "",
name: str = "",
alternative: str = "",
alternative_import: str = "",
pending: bool = False,
obj_type: str = "",
addendum: str = "",

@@ -105,6 +106,7 @@ def deprecated(
_name: str = name,
_message: str = message,
_alternative: str = alternative,
_alternative_import: str = alternative_import,
_pending: bool = pending,
_addendum: str = addendum,
) -> T:

@@ -117,6 +119,7 @@ def deprecated(
message=_message,
name=_name,
alternative=_alternative,
alternative_import=_alternative_import,
pending=_pending,
obj_type=_obj_type,
addendum=_addendum,

@@ -145,7 +148,9 @@ def deprecated(
if not _obj_type:
_obj_type = "class"
wrapped = obj.__init__ # type: ignore
_name = _name or obj.__name__
_name = _name or (
f"{obj.__module__}.{obj.__name__}" if obj.__module__ else obj.__name__
)
old_doc = obj.__doc__
def finalize(_: Any, new_doc: str) -> T:

@@ -271,6 +276,7 @@ def warn_deprecated(
message: str = "",
name: str = "",
alternative: str = "",
alternative_import: str = "",
pending: bool = False,
obj_type: str = "",
addendum: str = "",

@@ -307,6 +313,10 @@ def warn_deprecated(
"""
if pending and removal:
raise ValueError("A pending deprecation cannot have a scheduled removal")
if alternative and alternative_import:
raise ValueError("Cannot specify both alternative and alternative_import")
if alternative_import and "." not in alternative_import:
raise ValueError("alternative_import must be a fully qualified module path")
if not pending:
if not removal:

@@ -320,6 +330,7 @@ def warn_deprecated(
if not message:
message = ""
package = name.split(".")[0].replace("_", "-") if "." in name else "LangChain"
if obj_type:
message += f"The {obj_type} `{name}`"

@@ -329,12 +340,24 @@ def warn_deprecated(
if pending:
message += " will be deprecated in a future version"
else:
message += f" was deprecated in LangChain {since}"
message += f" was deprecated in {package} {since}"
if removal:
message += f" and will be removed {removal}"
if alternative:
if alternative_import:
alt_package = alternative_import.split(".")[0].replace("_", "-")
if alt_package == package:
message += f". Use {alternative_import} instead."
else:
alt_module, alt_name = alternative_import.rsplit(".", 1)
message += (
f". An updated version of the {obj_type} exists in the "
f"{alt_package} package and should be used instead. To use it run "
f"`pip install -U {alt_package}` and import as "
f"`from {alt_module} import {alt_name}`."
)
elif alternative:
message += f". Use {alternative} instead."
if addendum:
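
The new `alternative_import` parameter above lets a deprecation point at a replacement that lives in a different package, and `warn_deprecated` then derives the package name, the pip command, and the import path for the message. A hedged sketch of the decorator as changed here; the class is hypothetical and the quoted message only paraphrases the f-strings above:

```python
from langchain_core._api.deprecation import deprecated


@deprecated(
    since="0.0.12",
    removal="0.2.0",
    alternative_import="langchain_openai.OpenAI",
)
class MyOldOpenAI:  # hypothetical class, for illustration only
    """Old wrapper kept for backwards compatibility."""

    def __init__(self) -> None:
        self.model = "gpt-3.5-turbo-instruct"


# Instantiating it emits a warning along the lines of:
#   The class `<module>.MyOldOpenAI` was deprecated in <package> 0.0.12 and
#   will be removed in 0.2.0. An updated version of the class exists in the
#   langchain-openai package and should be used instead. To use it run
#   `pip install -U langchain-openai` and import as
#   `from langchain_openai import OpenAI`.
MyOldOpenAI()
```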

@@ -101,8 +101,9 @@ class ToolException(Exception):
pass
ToolInput = Union[str, Dict]
class BaseTool(RunnableSerializable[Union[str, Dict], Any]):
class BaseTool(RunnableSerializable[ToolInput, Any]):
"""Interface LangChain tools must implement."""
def __init_subclass__(cls, **kwargs: Any) -> None:
libs/core/poetry.lock (20 changes, generated)
@@ -1164,16 +1164,6 @@ files = [
|
||||
{file = "MarkupSafe-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5bbe06f8eeafd38e5d0a4894ffec89378b6c6a625ff57e3028921f8ff59318ac"},
|
||||
{file = "MarkupSafe-2.1.3-cp311-cp311-win32.whl", hash = "sha256:dd15ff04ffd7e05ffcb7fe79f1b98041b8ea30ae9234aed2a9168b5797c3effb"},
|
||||
{file = "MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:134da1eca9ec0ae528110ccc9e48041e0828d79f24121a1a146161103c76e686"},
|
||||
{file = "MarkupSafe-2.1.3-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:f698de3fd0c4e6972b92290a45bd9b1536bffe8c6759c62471efaa8acb4c37bc"},
|
||||
{file = "MarkupSafe-2.1.3-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:aa57bd9cf8ae831a362185ee444e15a93ecb2e344c8e52e4d721ea3ab6ef1823"},
|
||||
{file = "MarkupSafe-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ffcc3f7c66b5f5b7931a5aa68fc9cecc51e685ef90282f4a82f0f5e9b704ad11"},
|
||||
{file = "MarkupSafe-2.1.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:47d4f1c5f80fc62fdd7777d0d40a2e9dda0a05883ab11374334f6c4de38adffd"},
|
||||
{file = "MarkupSafe-2.1.3-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1f67c7038d560d92149c060157d623c542173016c4babc0c1913cca0564b9939"},
|
||||
{file = "MarkupSafe-2.1.3-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:9aad3c1755095ce347e26488214ef77e0485a3c34a50c5a5e2471dff60b9dd9c"},
|
||||
{file = "MarkupSafe-2.1.3-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:14ff806850827afd6b07a5f32bd917fb7f45b046ba40c57abdb636674a8b559c"},
|
||||
{file = "MarkupSafe-2.1.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8f9293864fe09b8149f0cc42ce56e3f0e54de883a9de90cd427f191c346eb2e1"},
|
||||
{file = "MarkupSafe-2.1.3-cp312-cp312-win32.whl", hash = "sha256:715d3562f79d540f251b99ebd6d8baa547118974341db04f5ad06d5ea3eb8007"},
|
||||
{file = "MarkupSafe-2.1.3-cp312-cp312-win_amd64.whl", hash = "sha256:1b8dd8c3fd14349433c79fa8abeb573a55fc0fdd769133baac1f5e07abf54aeb"},
|
||||
{file = "MarkupSafe-2.1.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:8e254ae696c88d98da6555f5ace2279cf7cd5b3f52be2b5cf97feafe883b58d2"},
|
||||
{file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cb0932dc158471523c9637e807d9bfb93e06a95cbf010f1a38b98623b929ef2b"},
|
||||
{file = "MarkupSafe-2.1.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9402b03f1a1b4dc4c19845e5c749e3ab82d5078d16a2a4c2cd2df62d57bb0707"},
|
||||
@@ -1953,7 +1943,6 @@ files = [
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"},
|
||||
@@ -1961,15 +1950,8 @@ files = [
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"},
|
||||
{file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"},
|
||||
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"},
|
||||
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"},
|
||||
@@ -1986,7 +1968,6 @@ files = [
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"},
|
||||
@@ -1994,7 +1975,6 @@ files = [
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"},
|
||||
{file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"},
|
||||
|
||||
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain-core"
version = "0.1.8"
version = "0.1.9"
description = "Building applications with LLMs through composability"
authors = []
license = "MIT"

@@ -219,8 +219,8 @@ def test_whole_class_deprecation() -> None:
assert len(warning_list) == 2
warning = warning_list[0].message
assert str(warning) == (
"The class `DeprecatedClass` was deprecated in "
"LangChain 2.0.0 and will be removed in 3.0.0"
"The class `tests.unit_tests._api.test_deprecation.DeprecatedClass` was "
"deprecated in tests 2.0.0 and will be removed in 3.0.0"
)
warning = warning_list[1].message
libs/experimental/langchain_experimental/text_splitter.py (159 lines, new file)
@@ -0,0 +1,159 @@
|
||||
import copy
|
||||
import re
|
||||
from typing import Any, Iterable, List, Optional, Sequence, Tuple
|
||||
|
||||
import numpy as np
|
||||
from langchain_community.utils.math import (
|
||||
cosine_similarity,
|
||||
)
|
||||
from langchain_core.documents import BaseDocumentTransformer, Document
|
||||
from langchain_core.embeddings import Embeddings
|
||||
|
||||
|
||||
def combine_sentences(sentences: List[dict], buffer_size: int = 1) -> List[dict]:
|
||||
# Go through each sentence dict
|
||||
for i in range(len(sentences)):
|
||||
# Create a string that will hold the sentences which are joined
|
||||
combined_sentence = ""
|
||||
|
||||
# Add sentences before the current one, based on the buffer size.
|
||||
for j in range(i - buffer_size, i):
|
||||
# Check if the index j is not negative
|
||||
# (to avoid index out of range like on the first one)
|
||||
if j >= 0:
|
||||
# Add the sentence at index j to the combined_sentence string
|
||||
combined_sentence += sentences[j]["sentence"] + " "
|
||||
|
||||
# Add the current sentence
|
||||
combined_sentence += sentences[i]["sentence"]
|
||||
|
||||
# Add sentences after the current one, based on the buffer size
|
||||
for j in range(i + 1, i + 1 + buffer_size):
|
||||
# Check if the index j is within the range of the sentences list
|
||||
if j < len(sentences):
|
||||
# Add the sentence at index j to the combined_sentence string
|
||||
combined_sentence += " " + sentences[j]["sentence"]
|
||||
|
||||
# Then add the whole thing to your dict
|
||||
# Store the combined sentence in the current sentence dict
|
||||
sentences[i]["combined_sentence"] = combined_sentence
|
||||
|
||||
return sentences
|
||||
|
||||
|
||||
def calculate_cosine_distances(sentences: List[dict]) -> Tuple[List[float], List[dict]]:
|
||||
distances = []
|
||||
for i in range(len(sentences) - 1):
|
||||
embedding_current = sentences[i]["combined_sentence_embedding"]
|
||||
embedding_next = sentences[i + 1]["combined_sentence_embedding"]
|
||||
|
||||
# Calculate cosine similarity
|
||||
similarity = cosine_similarity([embedding_current], [embedding_next])[0][0]
|
||||
|
||||
# Convert to cosine distance
|
||||
distance = 1 - similarity
|
||||
|
||||
# Append cosine distance to the list
|
||||
distances.append(distance)
|
||||
|
||||
# Store distance in the dictionary
|
||||
sentences[i]["distance_to_next"] = distance
|
||||
|
||||
# Optionally handle the last sentence
|
||||
# sentences[-1]['distance_to_next'] = None # or a default value
|
||||
|
||||
return distances, sentences
|
||||
|
||||
|
||||
class SemanticChunker(BaseDocumentTransformer):
|
||||
"""Splits the text based on semantic similarity.
|
||||
|
||||
Taken from Greg Kamradt's wonderful notebook:
|
||||
https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/5_Levels_Of_Text_Splitting.ipynb
|
||||
|
||||
All credit to him.
|
||||
|
||||
At a high level, this splits into sentences, then groups into groups of 3
|
||||
sentences, and then merges one that are similar in the embedding space.
|
||||
"""
|
||||
|
||||
def __init__(self, embeddings: Embeddings, add_start_index: bool = False):
|
||||
self._add_start_index = add_start_index
|
||||
self.embeddings = embeddings
|
||||
|
||||
def split_text(self, text: str) -> List[str]:
|
||||
"""Split text into multiple components."""
|
||||
# Splitting the essay on '.', '?', and '!'
|
||||
single_sentences_list = re.split(r"(?<=[.?!])\s+", text)
|
||||
sentences = [
|
||||
{"sentence": x, "index": i} for i, x in enumerate(single_sentences_list)
|
||||
]
|
||||
sentences = combine_sentences(sentences)
|
||||
embeddings = self.embeddings.embed_documents(
|
||||
[x["combined_sentence"] for x in sentences]
|
||||
)
|
||||
for i, sentence in enumerate(sentences):
|
||||
sentence["combined_sentence_embedding"] = embeddings[i]
|
||||
distances, sentences = calculate_cosine_distances(sentences)
|
||||
start_index = 0
|
||||
|
||||
# Create a list to hold the grouped sentences
|
||||
chunks = []
|
||||
breakpoint_percentile_threshold = 95
|
||||
breakpoint_distance_threshold = np.percentile(
|
||||
distances, breakpoint_percentile_threshold
|
||||
) # If you want more chunks, lower the percentile cutoff
|
||||
|
||||
indices_above_thresh = [
|
||||
i for i, x in enumerate(distances) if x > breakpoint_distance_threshold
|
||||
] # The indices of those breakpoints on your list
|
||||
|
||||
# Iterate through the breakpoints to slice the sentences
|
||||
for index in indices_above_thresh:
|
||||
# The end index is the current breakpoint
|
||||
end_index = index
|
||||
|
||||
# Slice the sentence_dicts from the current start index to the end index
|
||||
group = sentences[start_index : end_index + 1]
|
||||
combined_text = " ".join([d["sentence"] for d in group])
|
||||
chunks.append(combined_text)
|
||||
|
||||
# Update the start index for the next group
|
||||
start_index = index + 1
|
||||
|
||||
# The last group, if any sentences remain
|
||||
if start_index < len(sentences):
|
||||
combined_text = " ".join([d["sentence"] for d in sentences[start_index:]])
|
||||
chunks.append(combined_text)
|
||||
return chunks
|
||||
|
||||
def create_documents(
|
||||
self, texts: List[str], metadatas: Optional[List[dict]] = None
|
||||
) -> List[Document]:
|
||||
"""Create documents from a list of texts."""
|
||||
_metadatas = metadatas or [{}] * len(texts)
|
||||
documents = []
|
||||
for i, text in enumerate(texts):
|
||||
index = -1
|
||||
for chunk in self.split_text(text):
|
||||
metadata = copy.deepcopy(_metadatas[i])
|
||||
if self._add_start_index:
|
||||
index = text.find(chunk, index + 1)
|
||||
metadata["start_index"] = index
|
||||
new_doc = Document(page_content=chunk, metadata=metadata)
|
||||
documents.append(new_doc)
|
||||
return documents
|
||||
|
||||
def split_documents(self, documents: Iterable[Document]) -> List[Document]:
|
||||
"""Split documents."""
|
||||
texts, metadatas = [], []
|
||||
for doc in documents:
|
||||
texts.append(doc.page_content)
|
||||
metadatas.append(doc.metadata)
|
||||
return self.create_documents(texts, metadatas=metadatas)
|
||||
|
||||
def transform_documents(
|
||||
self, documents: Sequence[Document], **kwargs: Any
|
||||
) -> Sequence[Document]:
|
||||
"""Transform sequence of documents by splitting them."""
|
||||
return self.split_documents(list(documents))
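
The new `SemanticChunker` above splits text into sentences, embeds a small window around each sentence, and opens a new chunk wherever the cosine distance to the next window exceeds the 95th-percentile breakpoint. A rough usage sketch; `FakeEmbeddings` only keeps it self-contained, and a real embedding model is needed for meaningful breakpoints:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_experimental.text_splitter import SemanticChunker

splitter = SemanticChunker(FakeEmbeddings(size=256), add_start_index=True)
docs = splitter.create_documents(
    ["LangChain ships many text splitters. Cats nap all day. Cats also purr loudly."]
)
for doc in docs:
    print(doc.metadata["start_index"], doc.page_content)
```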
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain-experimental"
version = "0.0.48"
version = "0.0.49"
description = "Building applications with LLMs through composability"
authors = []
license = "MIT"
@@ -371,7 +371,7 @@ class RunnableAgent(BaseSingleActionAgent):
|
||||
callbacks: Callbacks = None,
|
||||
**kwargs: Any,
|
||||
) -> Union[AgentAction, AgentFinish]:
|
||||
"""Given input, decided what to do.
|
||||
"""Based on past history and current inputs, decide what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
@@ -383,8 +383,19 @@ class RunnableAgent(BaseSingleActionAgent):
|
||||
Action specifying what tool to use.
|
||||
"""
|
||||
inputs = {**kwargs, **{"intermediate_steps": intermediate_steps}}
|
||||
output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
|
||||
return output
|
||||
# Use streaming to make sure that the underlying LLM is invoked in a streaming
|
||||
# fashion to make it possible to get access to the individual LLM tokens
|
||||
# when using stream_log with the Agent Executor.
|
||||
# Because the response from the plan is not a generator, we need to
|
||||
# accumulate the output into final output and return that.
|
||||
final_output: Any = None
|
||||
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
|
||||
if final_output is None:
|
||||
final_output = chunk
|
||||
else:
|
||||
final_output += chunk
|
||||
|
||||
return final_output
|
||||
|
||||
async def aplan(
|
||||
self,
|
||||
@@ -395,20 +406,32 @@ class RunnableAgent(BaseSingleActionAgent):
|
||||
AgentAction,
|
||||
AgentFinish,
|
||||
]:
|
||||
"""Given input, decided what to do.
|
||||
"""Based on past history and current inputs, decide what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
along with observations
|
||||
callbacks: Callbacks to run.
|
||||
**kwargs: User inputs.
|
||||
**kwargs: User inputs
|
||||
|
||||
Returns:
|
||||
Action specifying what tool to use.
|
||||
"""
|
||||
inputs = {**kwargs, **{"intermediate_steps": intermediate_steps}}
|
||||
output = await self.runnable.ainvoke(inputs, config={"callbacks": callbacks})
|
||||
return output
|
||||
final_output: Any = None
|
||||
# Use streaming to make sure that the underlying LLM is invoked in a streaming
|
||||
# fashion to make it possible to get access to the individual LLM tokens
|
||||
# when using stream_log with the Agent Executor.
|
||||
# Because the response from the plan is not a generator, we need to
|
||||
# accumulate the output into final output and return that.
|
||||
async for chunk in self.runnable.astream(
|
||||
inputs, config={"callbacks": callbacks}
|
||||
):
|
||||
if final_output is None:
|
||||
final_output = chunk
|
||||
else:
|
||||
final_output += chunk
|
||||
return final_output
|
||||
|
||||
|
||||
class RunnableMultiActionAgent(BaseMultiActionAgent):
|
||||
@@ -447,7 +470,7 @@ class RunnableMultiActionAgent(BaseMultiActionAgent):
|
||||
List[AgentAction],
|
||||
AgentFinish,
|
||||
]:
|
||||
"""Given input, decided what to do.
|
||||
"""Based on past history and current inputs, decide what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
@@ -459,8 +482,19 @@ class RunnableMultiActionAgent(BaseMultiActionAgent):
|
||||
Action specifying what tool to use.
|
||||
"""
|
||||
inputs = {**kwargs, **{"intermediate_steps": intermediate_steps}}
|
||||
output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
|
||||
return output
|
||||
# Use streaming to make sure that the underlying LLM is invoked in a streaming
|
||||
# fashion to make it possible to get access to the individual LLM tokens
|
||||
# when using stream_log with the Agent Executor.
|
||||
# Because the response from the plan is not a generator, we need to
|
||||
# accumulate the output into final output and return that.
|
||||
final_output: Any = None
|
||||
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
|
||||
if final_output is None:
|
||||
final_output = chunk
|
||||
else:
|
||||
final_output += chunk
|
||||
|
||||
return final_output
|
||||
|
||||
async def aplan(
|
||||
self,
|
||||
@@ -471,7 +505,7 @@ class RunnableMultiActionAgent(BaseMultiActionAgent):
|
||||
List[AgentAction],
|
||||
AgentFinish,
|
||||
]:
|
||||
"""Given input, decided what to do.
|
||||
"""Based on past history and current inputs, decide what to do.
|
||||
|
||||
Args:
|
||||
intermediate_steps: Steps the LLM has taken to date,
|
||||
@@ -483,8 +517,21 @@ class RunnableMultiActionAgent(BaseMultiActionAgent):
|
||||
Action specifying what tool to use.
|
||||
"""
|
||||
inputs = {**kwargs, **{"intermediate_steps": intermediate_steps}}
|
||||
output = await self.runnable.ainvoke(inputs, config={"callbacks": callbacks})
|
||||
return output
|
||||
# Use streaming to make sure that the underlying LLM is invoked in a streaming
|
||||
# fashion to make it possible to get access to the individual LLM tokens
|
||||
# when using stream_log with the Agent Executor.
|
||||
# Because the response from the plan is not a generator, we need to
|
||||
# accumulate the output into final output and return that.
|
||||
final_output: Any = None
|
||||
async for chunk in self.runnable.astream(
|
||||
inputs, config={"callbacks": callbacks}
|
||||
):
|
||||
if final_output is None:
|
||||
final_output = chunk
|
||||
else:
|
||||
final_output += chunk
|
||||
|
||||
return final_output
|
||||
|
||||
|
||||
@deprecated(
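
The `plan`/`aplan` rewrites above stop calling `invoke`/`ainvoke` directly and instead consume `stream`/`astream`, folding the chunks back together with `+` so that `astream_log` on the executor can surface individual LLM tokens. A compact sketch of that accumulation pattern, using plain strings as stand-ins for chunk types that support `+`:

```python
from typing import Any, Iterable


def accumulate(chunks: Iterable[Any]) -> Any:
    # Mirrors the loop added to plan(): keep streaming so callbacks see every
    # chunk, then fold the chunks into a single final output.
    final_output: Any = None
    for chunk in chunks:
        final_output = chunk if final_output is None else final_output + chunk
    return final_output


assert accumulate(["Hel", "lo", "!"]) == "Hello!"
```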
@@ -19,7 +19,6 @@ def create_openai_tools_agent(
|
||||
|
||||
Examples:
|
||||
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
from langchain import hub
|
||||
@@ -56,7 +55,6 @@ def create_openai_tools_agent(
|
||||
A runnable sequence representing an agent. It takes as input all the same input
|
||||
variables as the prompt passed in does. It returns as output either an
|
||||
AgentAction or AgentFinish.
|
||||
|
||||
"""
|
||||
missing_vars = {"agent_scratchpad"}.difference(prompt.input_variables)
|
||||
if missing_vars:
|
||||
|
||||
libs/langchain/langchain/tools/tool_executor.py (66 lines, new file)
@@ -0,0 +1,66 @@
|
||||
from typing import Any, List, Optional, TypedDict, Union
|
||||
|
||||
from langchain_core.runnables import (
|
||||
RunnableConfig,
|
||||
RunnableSerializable,
|
||||
get_config_list,
|
||||
)
|
||||
from langchain_core.runnables.config import get_executor_for_config
|
||||
from langchain_core.tools import BaseTool, ToolInput
|
||||
|
||||
|
||||
class ToolInvocation(TypedDict):
|
||||
tool_name: str
|
||||
tool_input: ToolInput
|
||||
|
||||
|
||||
def _batch(tool, tool_inputs, config, return_exceptions):
|
||||
return tool.batch(tool_inputs, config=config, return_exceptions=return_exceptions)
|
||||
|
||||
|
||||
class ToolExecutor(RunnableSerializable[ToolInvocation, Any]):
|
||||
tools: List[BaseTool]
|
||||
|
||||
def _get_tool(self, tool_name: str) -> BaseTool:
|
||||
tool_map = {tool.name: tool for tool in self.tools}
|
||||
if tool_name not in tool_map:
|
||||
raise ValueError
|
||||
return tool_map[tool_name]
|
||||
|
||||
def invoke(
|
||||
self, input: ToolInvocation, config: Optional[RunnableConfig] = None
|
||||
) -> Any:
|
||||
tool = self._get_tool(input["tool_name"])
|
||||
return tool.invoke(input["tool_input"], config=config)
|
||||
|
||||
async def ainvoke(
|
||||
self, input: ToolInvocation, config: Optional[RunnableConfig] = None
|
||||
) -> Any:
|
||||
tool = self._get_tool(input["tool_name"])
|
||||
return await tool.ainvoke(input["tool_input"], config=config)
|
||||
|
||||
def batch(
|
||||
self,
|
||||
inputs: List[ToolInvocation],
|
||||
config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
|
||||
*,
|
||||
return_exceptions: bool = False,
|
||||
**kwargs: Optional[Any],
|
||||
) -> List[Any]:
|
||||
batch_by_tool = {}
|
||||
for input in inputs:
|
||||
batch_by_tool[input["tool_name"]] = batch_by_tool.get(
|
||||
input["tool_name"], []
|
||||
) + [input["tool_input"]]
|
||||
tools = list(batch_by_tool.keys())
|
||||
tools_inputs = list(batch_by_tool.values())
|
||||
configs = get_config_list(config, len(tools))
|
||||
return_exceptions_list = [return_exceptions] * len(tools)
|
||||
with get_executor_for_config(configs[0]) as executor:
|
||||
return (
|
||||
list(
|
||||
executor.map(
|
||||
_batch, tools, tools_inputs, configs, return_exceptions_list
|
||||
)
|
||||
),
|
||||
)
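
The new `tool_executor.py` module above adds a `ToolExecutor` runnable that looks up a tool by name and forwards the `tool_input` from a `ToolInvocation` dict. A rough usage sketch under the assumption that this module ships as shown on this branch; the `multiply` tool is made up for illustration:

```python
from langchain_core.tools import tool

from langchain.tools.tool_executor import ToolExecutor


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


executor = ToolExecutor(tools=[multiply])
# tool_input follows the new ToolInput alias: a string or a dict of arguments.
result = executor.invoke({"tool_name": "multiply", "tool_input": {"a": 6, "b": 7}})
print(result)  # 42
```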
@@ -1,16 +1,39 @@
|
||||
"""Unit tests for agents."""
|
||||
import json
|
||||
from itertools import cycle
|
||||
from typing import Any, Dict, List, Optional, Union, cast
|
||||
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
from langchain_core.agents import AgentAction, AgentStep
|
||||
from langchain_core.agents import (
|
||||
AgentAction,
|
||||
AgentActionMessageLog,
|
||||
AgentFinish,
|
||||
AgentStep,
|
||||
)
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.messages import AIMessage, HumanMessage
|
||||
from langchain_core.messages import (
|
||||
AIMessage,
|
||||
AIMessageChunk,
|
||||
FunctionMessage,
|
||||
HumanMessage,
|
||||
)
|
||||
from langchain_core.prompts import MessagesPlaceholder
|
||||
from langchain_core.runnables.utils import add
|
||||
from langchain_core.tools import Tool
|
||||
from langchain_core.tracers import RunLog, RunLogPatch
|
||||
|
||||
from langchain.agents import AgentExecutor, AgentType, initialize_agent
|
||||
from langchain.agents import (
|
||||
AgentExecutor,
|
||||
AgentType,
|
||||
create_openai_functions_agent,
|
||||
create_openai_tools_agent,
|
||||
initialize_agent,
|
||||
)
|
||||
from langchain.agents.output_parsers.openai_tools import OpenAIToolAgentAction
|
||||
from langchain.callbacks.manager import CallbackManagerForLLMRun
|
||||
from langchain.prompts import ChatPromptTemplate
|
||||
from langchain.tools import tool
|
||||
from tests.unit_tests.callbacks.fake_callback_handler import FakeCallbackHandler
|
||||
from tests.unit_tests.llms.fake_chat_model import GenericFakeChatModel
|
||||
|
||||
|
||||
class FakeListLLM(LLM):
|
||||
@@ -414,3 +437,797 @@ def test_agent_invalid_tool() -> None:
|
||||
|
||||
resp = agent("when was langchain made")
|
||||
resp["intermediate_steps"][0][1] == "Foo is not a valid tool, try one of [Search]."
|
||||
|
||||
|
||||
async def test_runnable_agent() -> None:
|
||||
"""Simple test to verify that an agent built with LCEL works."""
|
||||
|
||||
# Will alternate between responding with hello and goodbye
|
||||
infinite_cycle = cycle([AIMessage(content="hello world!")])
|
||||
# When streaming GenericFakeChatModel breaks AIMessage into chunks based on spaces
|
||||
model = GenericFakeChatModel(messages=infinite_cycle)
|
||||
|
||||
template = ChatPromptTemplate.from_messages(
|
||||
[("system", "You are Cat Agent 007"), ("human", "{question}")]
|
||||
)
|
||||
|
||||
def fake_parse(inputs: dict) -> Union[AgentFinish, AgentAction]:
|
||||
"""A parser."""
|
||||
return AgentFinish(return_values={"foo": "meow"}, log="hard-coded-message")
|
||||
|
||||
agent = template | model | fake_parse
|
||||
executor = AgentExecutor(agent=agent, tools=[])
|
||||
|
||||
# Invoke
|
||||
result = executor.invoke({"question": "hello"})
|
||||
assert result == {"foo": "meow", "question": "hello"}
|
||||
|
||||
# ainvoke
|
||||
result = await executor.ainvoke({"question": "hello"})
|
||||
assert result == {"foo": "meow", "question": "hello"}
|
||||
|
||||
# Batch
|
||||
result = executor.batch( # type: ignore[assignment]
|
||||
[{"question": "hello"}, {"question": "hello"}]
|
||||
)
|
||||
assert result == [
|
||||
{"foo": "meow", "question": "hello"},
|
||||
{"foo": "meow", "question": "hello"},
|
||||
]
|
||||
|
||||
# abatch
|
||||
result = await executor.abatch( # type: ignore[assignment]
|
||||
[{"question": "hello"}, {"question": "hello"}]
|
||||
)
|
||||
assert result == [
|
||||
{"foo": "meow", "question": "hello"},
|
||||
{"foo": "meow", "question": "hello"},
|
||||
]
|
||||
|
||||
# Stream
|
||||
results = list(executor.stream({"question": "hello"}))
|
||||
assert results == [
|
||||
{"foo": "meow", "messages": [AIMessage(content="hard-coded-message")]}
|
||||
]
|
||||
|
||||
# astream
|
||||
results = [r async for r in executor.astream({"question": "hello"})]
|
||||
assert results == [
|
||||
{
|
||||
"foo": "meow",
|
||||
"messages": [
|
||||
AIMessage(content="hard-coded-message"),
|
||||
],
|
||||
}
|
||||
]
|
||||
|
||||
# stream log
|
||||
results: List[RunLogPatch] = [ # type: ignore[no-redef]
|
||||
r async for r in executor.astream_log({"question": "hello"})
|
||||
]
|
||||
# # Let's stream just the llm tokens.
|
||||
messages = []
|
||||
for log_record in results:
|
||||
for op in log_record.ops: # type: ignore[attr-defined]
|
||||
if op["op"] == "add" and isinstance(op["value"], AIMessageChunk):
|
||||
messages.append(op["value"])
|
||||
|
||||
assert messages != []
|
||||
|
||||
# Aggregate state
|
||||
run_log = None
|
||||
|
||||
for result in results:
|
||||
if run_log is None:
|
||||
run_log = result
|
||||
else:
|
||||
# `+` is defined for RunLogPatch
|
||||
run_log = run_log + result # type: ignore[union-attr]
|
||||
|
||||
assert isinstance(run_log, RunLog)
|
||||
|
||||
assert run_log.state["final_output"] == {
|
||||
"foo": "meow",
|
||||
"messages": [AIMessage(content="hard-coded-message")],
|
||||
}
|
||||
|
||||
|
||||
async def test_runnable_agent_with_function_calls() -> None:
|
||||
"""Test agent with intermediate agent actions."""
|
||||
# Will alternate between responding with hello and goodbye
|
||||
infinite_cycle = cycle(
|
||||
[AIMessage(content="looking for pet..."), AIMessage(content="Found Pet")]
|
||||
)
|
||||
model = GenericFakeChatModel(messages=infinite_cycle)
|
||||
|
||||
template = ChatPromptTemplate.from_messages(
|
||||
[("system", "You are Cat Agent 007"), ("human", "{question}")]
|
||||
)
|
||||
|
||||
parser_responses = cycle(
|
||||
[
|
||||
AgentAction(
|
||||
tool="find_pet",
|
||||
tool_input={
|
||||
"pet": "cat",
|
||||
},
|
||||
log="find_pet()",
|
||||
),
|
||||
AgentFinish(
|
||||
return_values={"foo": "meow"},
|
||||
log="hard-coded-message",
|
||||
),
|
||||
],
|
||||
)
|
||||
|
||||
def fake_parse(inputs: dict) -> Union[AgentFinish, AgentAction]:
|
||||
"""A parser."""
|
||||
return cast(Union[AgentFinish, AgentAction], next(parser_responses))
|
||||
|
||||
@tool
|
||||
def find_pet(pet: str) -> str:
|
||||
"""Find the given pet."""
|
||||
if pet != "cat":
|
||||
raise ValueError("Only cats allowed")
|
||||
return "Spying from under the bed."
|
||||
|
||||
agent = template | model | fake_parse
|
||||
executor = AgentExecutor(agent=agent, tools=[find_pet])
|
||||
|
||||
# Invoke
|
||||
result = executor.invoke({"question": "hello"})
|
||||
assert result == {"foo": "meow", "question": "hello"}
|
||||
|
||||
# ainvoke
|
||||
result = await executor.ainvoke({"question": "hello"})
|
||||
assert result == {"foo": "meow", "question": "hello"}
|
||||
|
||||
# astream
|
||||
results = [r async for r in executor.astream({"question": "hello"})]
|
||||
assert results == [
|
||||
{
|
||||
"actions": [
|
||||
AgentAction(
|
||||
tool="find_pet", tool_input={"pet": "cat"}, log="find_pet()"
|
||||
)
|
||||
],
|
||||
"messages": [AIMessage(content="find_pet()")],
|
||||
},
|
||||
{
|
||||
"messages": [HumanMessage(content="Spying from under the bed.")],
|
||||
"steps": [
|
||||
AgentStep(
|
||||
action=AgentAction(
|
||||
tool="find_pet", tool_input={"pet": "cat"}, log="find_pet()"
|
||||
),
|
||||
observation="Spying from under the bed.",
|
||||
)
|
||||
],
|
||||
},
|
||||
{"foo": "meow", "messages": [AIMessage(content="hard-coded-message")]},
|
||||
]
|
||||
|
||||
# astream log
|
||||
|
||||
messages = []
|
||||
async for patch in executor.astream_log({"question": "hello"}):
|
||||
for op in patch.ops:
|
||||
if op["op"] != "add":
|
||||
continue
|
||||
|
||||
value = op["value"]
|
||||
|
||||
if not isinstance(value, AIMessageChunk):
|
||||
continue
|
||||
|
||||
if value.content == "": # Then it's a function invocation message
|
||||
continue
|
||||
|
||||
messages.append(value.content)
|
||||
|
||||
assert messages == ["looking", " ", "for", " ", "pet...", "Found", " ", "Pet"]
|
||||
|
||||
|
||||
async def test_runnable_with_multi_action_per_step() -> None:
|
||||
"""Test an agent that can make multiple function calls at once."""
|
||||
# Will alternate between responding with hello and goodbye
|
||||
infinite_cycle = cycle(
|
||||
[AIMessage(content="looking for pet..."), AIMessage(content="Found Pet")]
|
||||
)
|
||||
model = GenericFakeChatModel(messages=infinite_cycle)
|
||||
|
||||
template = ChatPromptTemplate.from_messages(
|
||||
[("system", "You are Cat Agent 007"), ("human", "{question}")]
|
||||
)
|
||||
|
||||
parser_responses = cycle(
|
||||
[
|
||||
[
|
||||
AgentAction(
|
||||
tool="find_pet",
|
||||
tool_input={
|
||||
"pet": "cat",
|
||||
},
|
||||
log="find_pet()",
|
||||
),
|
||||
AgentAction(
|
||||
tool="pet_pet", # A function that allows you to pet the given pet.
|
||||
tool_input={
|
||||
"pet": "cat",
|
||||
},
|
||||
log="pet_pet()",
|
||||
),
|
||||
],
|
||||
AgentFinish(
|
||||
return_values={"foo": "meow"},
|
||||
log="hard-coded-message",
|
||||
),
|
||||
],
|
||||
)
|
||||
|
||||
def fake_parse(inputs: dict) -> Union[AgentFinish, AgentAction]:
|
||||
"""A parser."""
|
||||
return cast(Union[AgentFinish, AgentAction], next(parser_responses))
|
||||
|
||||
@tool
|
||||
def find_pet(pet: str) -> str:
|
||||
"""Find the given pet."""
|
||||
if pet != "cat":
|
||||
raise ValueError("Only cats allowed")
|
||||
return "Spying from under the bed."
|
||||
|
||||
@tool
|
||||
def pet_pet(pet: str) -> str:
|
||||
"""Pet the given pet."""
|
||||
if pet != "cat":
|
||||
raise ValueError("Only cats should be petted.")
|
||||
return "purrrr"
|
||||
|
||||
agent = template | model | fake_parse
|
||||
executor = AgentExecutor(agent=agent, tools=[find_pet])
|
||||
|
||||
# Invoke
|
||||
result = executor.invoke({"question": "hello"})
|
||||
assert result == {"foo": "meow", "question": "hello"}
|
||||
|
||||
# ainvoke
|
||||
result = await executor.ainvoke({"question": "hello"})
|
||||
assert result == {"foo": "meow", "question": "hello"}
|
||||
|
||||
# astream
|
||||
results = [r async for r in executor.astream({"question": "hello"})]
|
||||
assert results == [
|
||||
{
|
||||
"actions": [
|
||||
AgentAction(
|
||||
tool="find_pet", tool_input={"pet": "cat"}, log="find_pet()"
|
||||
)
|
||||
],
|
||||
"messages": [AIMessage(content="find_pet()")],
|
||||
},
|
||||
{
|
||||
"actions": [
|
||||
AgentAction(tool="pet_pet", tool_input={"pet": "cat"}, log="pet_pet()")
|
||||
],
|
||||
"messages": [AIMessage(content="pet_pet()")],
|
||||
},
|
||||
{
|
||||
# By-default observation gets converted into human message.
|
||||
"messages": [HumanMessage(content="Spying from under the bed.")],
|
||||
"steps": [
|
||||
AgentStep(
|
||||
action=AgentAction(
|
||||
tool="find_pet", tool_input={"pet": "cat"}, log="find_pet()"
|
||||
),
|
||||
observation="Spying from under the bed.",
|
||||
)
|
||||
],
|
||||
},
|
||||
{
|
||||
"messages": [
|
||||
HumanMessage(
|
||||
content="pet_pet is not a valid tool, try one of [find_pet]."
|
||||
)
|
||||
],
|
||||
"steps": [
|
||||
AgentStep(
|
||||
action=AgentAction(
|
||||
tool="pet_pet", tool_input={"pet": "cat"}, log="pet_pet()"
|
||||
),
|
||||
observation="pet_pet is not a valid tool, try one of [find_pet].",
|
||||
)
|
||||
],
|
||||
},
|
||||
{"foo": "meow", "messages": [AIMessage(content="hard-coded-message")]},
|
||||
]
|
||||
|
||||
# astream log
|
||||
|
||||
messages = []
|
||||
async for patch in executor.astream_log({"question": "hello"}):
|
||||
for op in patch.ops:
|
||||
if op["op"] != "add":
|
||||
continue
|
||||
|
||||
value = op["value"]
|
||||
|
||||
if not isinstance(value, AIMessageChunk):
|
||||
continue
|
||||
|
||||
if value.content == "": # Then it's a function invocation message
|
||||
continue
|
||||
|
||||
messages.append(value.content)
|
||||
|
||||
assert messages == ["looking", " ", "for", " ", "pet...", "Found", " ", "Pet"]
|
||||
|
||||
|
||||
def _make_func_invocation(name: str, **kwargs: Any) -> AIMessage:
|
||||
"""Create an AIMessage that represents a function invocation.
|
||||
|
||||
Args:
|
||||
name: Name of the function to invoke.
|
||||
kwargs: Keyword arguments to pass to the function.
|
||||
|
||||
Returns:
|
||||
AIMessage that represents a request to invoke a function.
|
||||
"""
|
||||
return AIMessage(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"function_call": {
|
||||
"name": name,
|
||||
"arguments": json.dumps(kwargs),
|
||||
}
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
async def test_openai_agent_with_streaming() -> None:
|
||||
"""Test openai agent with streaming."""
|
||||
infinite_cycle = cycle(
|
||||
[
|
||||
_make_func_invocation("find_pet", pet="cat"),
|
||||
AIMessage(content="The cat is spying from under the bed."),
|
||||
]
|
||||
)
|
||||
|
||||
model = GenericFakeChatModel(messages=infinite_cycle)
|
||||
|
||||
@tool
|
||||
def find_pet(pet: str) -> str:
|
||||
"""Find the given pet."""
|
||||
if pet != "cat":
|
||||
raise ValueError("Only cats allowed")
|
||||
return "Spying from under the bed."
|
||||
|
||||
template = ChatPromptTemplate.from_messages(
|
||||
[
|
||||
("system", "You are a helpful AI bot. Your name is kitty power meow."),
|
||||
("human", "{question}"),
|
||||
MessagesPlaceholder(
|
||||
variable_name="agent_scratchpad",
|
||||
),
|
||||
]
|
||||
)
|
||||
|
||||
# type error due to base tool type below -- would need to be adjusted on tool
|
||||
# decorator.
|
||||
agent = create_openai_functions_agent(
|
||||
model,
|
||||
[find_pet], # type: ignore[list-item]
|
||||
template,
|
||||
)
|
||||
executor = AgentExecutor(agent=agent, tools=[find_pet])
|
||||
|
||||
# Invoke
|
||||
result = executor.invoke({"question": "hello"})
|
||||
assert result == {
|
||||
"output": "The cat is spying from under the bed.",
|
||||
"question": "hello",
|
||||
}
|
||||
|
||||
# astream
|
||||
chunks = [chunk async for chunk in executor.astream({"question": "hello"})]
|
||||
assert chunks == [
|
||||
{
|
||||
"actions": [
|
||||
AgentActionMessageLog(
|
||||
tool="find_pet",
|
||||
tool_input={"pet": "cat"},
|
||||
log="\nInvoking: `find_pet` with `{'pet': 'cat'}`\n\n\n",
|
||||
message_log=[
|
||||
AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"function_call": {
|
||||
"name": "find_pet",
|
||||
"arguments": '{"pet": "cat"}',
|
||||
}
|
||||
},
|
||||
)
|
||||
],
|
||||
)
|
||||
],
|
||||
"messages": [
|
||||
AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"function_call": {
|
||||
"name": "find_pet",
|
||||
"arguments": '{"pet": "cat"}',
|
||||
}
|
||||
},
|
||||
)
|
||||
],
|
||||
},
|
||||
{
|
||||
"messages": [
|
||||
FunctionMessage(content="Spying from under the bed.", name="find_pet")
|
||||
],
|
||||
"steps": [
|
||||
AgentStep(
|
||||
action=AgentActionMessageLog(
|
||||
tool="find_pet",
|
||||
tool_input={"pet": "cat"},
|
||||
log="\nInvoking: `find_pet` with `{'pet': 'cat'}`\n\n\n",
|
||||
message_log=[
|
||||
AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"function_call": {
|
||||
"name": "find_pet",
|
||||
"arguments": '{"pet": "cat"}',
|
||||
}
|
||||
},
|
||||
)
|
||||
],
|
||||
),
|
||||
observation="Spying from under the bed.",
|
||||
)
|
||||
],
|
||||
},
|
||||
{
|
||||
"messages": [AIMessage(content="The cat is spying from under the bed.")],
|
||||
"output": "The cat is spying from under the bed.",
|
||||
},
|
||||
]
|
||||
#
|
||||
# # astream_log
|
||||
log_patches = [
|
||||
log_patch async for log_patch in executor.astream_log({"question": "hello"})
|
||||
]
|
||||
|
||||
messages = []
|
||||
|
||||
for log_patch in log_patches:
|
||||
for op in log_patch.ops:
|
||||
if op["op"] == "add" and isinstance(op["value"], AIMessageChunk):
|
||||
value = op["value"]
|
||||
if value.content: # Filter out function call messages
|
||||
messages.append(value.content)
|
||||
|
||||
assert messages == [
|
||||
"The",
|
||||
" ",
|
||||
"cat",
|
||||
" ",
|
||||
"is",
|
||||
" ",
|
||||
"spying",
|
||||
" ",
|
||||
"from",
|
||||
" ",
|
||||
"under",
|
||||
" ",
|
||||
"the",
|
||||
" ",
|
||||
"bed.",
|
||||
]
|
||||
|
||||
|
||||
def _make_tools_invocation(name_to_arguments: Dict[str, Dict[str, Any]]) -> AIMessage:
|
||||
"""Create an AIMessage that represents a tools invocation.
|
||||
|
||||
Args:
|
||||
name_to_arguments: A dictionary mapping tool names to an invocation.
|
||||
|
||||
Returns:
|
||||
AIMessage that represents a request to invoke a tool.
|
||||
"""
|
||||
tool_calls = [
|
||||
{"function": {"name": name, "arguments": json.dumps(arguments)}, "id": idx}
|
||||
for idx, (name, arguments) in enumerate(name_to_arguments.items())
|
||||
]
|
||||
|
||||
return AIMessage(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"tool_calls": tool_calls,
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
async def test_openai_agent_tools_agent() -> None:
|
||||
"""Test OpenAI tools agent."""
|
||||
infinite_cycle = cycle(
|
||||
[
|
||||
_make_tools_invocation(
|
||||
{
|
||||
"find_pet": {"pet": "cat"},
|
||||
"check_time": {},
|
||||
}
|
||||
),
|
||||
AIMessage(content="The cat is spying from under the bed."),
|
||||
]
|
||||
)
|
||||
|
||||
model = GenericFakeChatModel(messages=infinite_cycle)
|
||||
|
||||
@tool
|
||||
def find_pet(pet: str) -> str:
|
||||
"""Find the given pet."""
|
||||
if pet != "cat":
|
||||
raise ValueError("Only cats allowed")
|
||||
return "Spying from under the bed."
|
||||
|
||||
@tool
|
||||
def check_time() -> str:
|
||||
"""Find the given pet."""
|
||||
return "It's time to pet the cat."
|
||||
|
||||
template = ChatPromptTemplate.from_messages(
|
||||
[
|
||||
("system", "You are a helpful AI bot. Your name is kitty power meow."),
|
||||
("human", "{question}"),
|
||||
MessagesPlaceholder(
|
||||
variable_name="agent_scratchpad",
|
||||
),
|
||||
]
|
||||
)
|
||||
|
||||
# type error due to base tool type below -- would need to be adjusted on tool
|
||||
# decorator.
|
||||
agent = create_openai_tools_agent(
|
||||
model,
|
||||
[find_pet], # type: ignore[list-item]
|
||||
template,
|
||||
)
|
||||
executor = AgentExecutor(agent=agent, tools=[find_pet])
|
||||
|
||||
# Invoke
|
||||
result = executor.invoke({"question": "hello"})
|
||||
assert result == {
|
||||
"output": "The cat is spying from under the bed.",
|
||||
"question": "hello",
|
||||
}
|
||||
|
||||
# astream
|
||||
chunks = [chunk async for chunk in executor.astream({"question": "hello"})]
|
||||
assert chunks == [
|
||||
{
|
||||
"actions": [
|
||||
OpenAIToolAgentAction(
|
||||
tool="find_pet",
|
||||
tool_input={"pet": "cat"},
|
||||
log="\nInvoking: `find_pet` with `{'pet': 'cat'}`\n\n\n",
|
||||
message_log=[
|
||||
AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"tool_calls": [
|
||||
{
|
||||
"function": {
|
||||
"name": "find_pet",
|
||||
"arguments": '{"pet": "cat"}',
|
||||
},
|
||||
"id": 0,
|
||||
},
|
||||
{
|
||||
"function": {
|
||||
"name": "check_time",
|
||||
"arguments": "{}",
|
||||
},
|
||||
"id": 1,
|
||||
},
|
||||
]
|
||||
},
|
||||
)
|
||||
],
|
||||
tool_call_id="0",
|
||||
)
|
||||
],
|
||||
"messages": [
|
||||
AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"tool_calls": [
|
||||
{
|
||||
"function": {
|
||||
"name": "find_pet",
|
||||
"arguments": '{"pet": "cat"}',
|
||||
},
|
||||
"id": 0,
|
||||
},
|
||||
{
|
||||
"function": {"name": "check_time", "arguments": "{}"},
|
||||
"id": 1,
|
||||
},
|
||||
]
|
||||
},
|
||||
)
|
||||
],
|
||||
},
|
||||
{
|
||||
"actions": [
|
||||
OpenAIToolAgentAction(
|
||||
tool="check_time",
|
||||
tool_input={},
|
||||
log="\nInvoking: `check_time` with `{}`\n\n\n",
|
||||
message_log=[
|
||||
AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"tool_calls": [
|
||||
{
|
||||
"function": {
|
||||
"name": "find_pet",
|
||||
"arguments": '{"pet": "cat"}',
|
||||
},
|
||||
"id": 0,
|
||||
},
|
||||
{
|
||||
"function": {
|
||||
"name": "check_time",
|
||||
"arguments": "{}",
|
||||
},
|
||||
"id": 1,
|
||||
},
|
||||
]
|
||||
},
|
||||
)
|
||||
],
|
||||
tool_call_id="1",
|
||||
)
|
||||
],
|
||||
"messages": [
|
||||
AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"tool_calls": [
|
||||
{
|
||||
"function": {
|
||||
"name": "find_pet",
|
||||
"arguments": '{"pet": "cat"}',
|
||||
},
|
||||
"id": 0,
|
||||
},
|
||||
{
|
||||
"function": {"name": "check_time", "arguments": "{}"},
|
||||
"id": 1,
|
||||
},
|
||||
]
|
||||
},
|
||||
)
|
||||
],
|
||||
},
|
||||
{
|
||||
"messages": [
|
||||
FunctionMessage(content="Spying from under the bed.", name="find_pet")
|
||||
],
|
||||
"steps": [
|
||||
AgentStep(
|
||||
action=OpenAIToolAgentAction(
|
||||
tool="find_pet",
|
||||
tool_input={"pet": "cat"},
|
||||
log="\nInvoking: `find_pet` with `{'pet': 'cat'}`\n\n\n",
|
||||
message_log=[
|
||||
AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"tool_calls": [
|
||||
{
|
||||
"function": {
|
||||
"name": "find_pet",
|
||||
"arguments": '{"pet": "cat"}',
|
||||
},
|
||||
"id": 0,
|
||||
},
|
||||
{
|
||||
"function": {
|
||||
"name": "check_time",
|
||||
"arguments": "{}",
|
||||
},
|
||||
"id": 1,
|
||||
},
|
||||
]
|
||||
},
|
||||
)
|
||||
],
|
||||
tool_call_id="0",
|
||||
),
|
||||
observation="Spying from under the bed.",
|
||||
)
|
||||
],
|
||||
},
|
||||
{
|
||||
"messages": [
|
||||
FunctionMessage(
|
||||
content="check_time is not a valid tool, try one of [find_pet].",
|
||||
name="check_time",
|
||||
)
|
||||
],
|
||||
"steps": [
|
||||
AgentStep(
|
||||
action=OpenAIToolAgentAction(
|
||||
tool="check_time",
|
||||
tool_input={},
|
||||
log="\nInvoking: `check_time` with `{}`\n\n\n",
|
||||
message_log=[
|
||||
AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"tool_calls": [
|
||||
{
|
||||
"function": {
|
||||
"name": "find_pet",
|
||||
"arguments": '{"pet": "cat"}',
|
||||
},
|
||||
"id": 0,
|
||||
},
|
||||
{
|
||||
"function": {
|
||||
"name": "check_time",
|
||||
"arguments": "{}",
|
||||
},
|
||||
"id": 1,
|
||||
},
|
||||
]
|
||||
},
|
||||
)
|
||||
],
|
||||
tool_call_id="1",
|
||||
),
|
||||
observation="check_time is not a valid tool, "
|
||||
"try one of [find_pet].",
|
||||
)
|
||||
],
|
||||
},
|
||||
{
|
||||
"messages": [AIMessage(content="The cat is spying from under the bed.")],
|
||||
"output": "The cat is spying from under the bed.",
|
||||
},
|
||||
]
|
||||
|
||||
# astream_log
|
||||
log_patches = [
|
||||
log_patch async for log_patch in executor.astream_log({"question": "hello"})
|
||||
]
|
||||
|
||||
# Get the tokens from the astream log response.
|
||||
messages = []
|
||||
|
||||
for log_patch in log_patches:
|
||||
for op in log_patch.ops:
|
||||
if op["op"] == "add" and isinstance(op["value"], AIMessageChunk):
|
||||
value = op["value"]
|
||||
if value.content: # Filter out function call messages
|
||||
messages.append(value.content)
|
||||
|
||||
assert messages == [
|
||||
"The",
|
||||
" ",
|
||||
"cat",
|
||||
" ",
|
||||
"is",
|
||||
" ",
|
||||
"spying",
|
||||
" ",
|
||||
"from",
|
||||
" ",
|
||||
"under",
|
||||
" ",
|
||||
"the",
|
||||
" ",
|
||||
"bed.",
|
||||
]
|
||||
|
||||
@@ -1,9 +1,15 @@
|
||||
"""Fake Chat Model wrapper for testing purposes."""
|
||||
from typing import Any, Dict, List, Optional
|
||||
import re
|
||||
from typing import Any, AsyncIterator, Dict, Iterator, List, Optional, cast
|
||||
|
||||
from langchain_core.language_models.chat_models import SimpleChatModel
|
||||
from langchain_core.messages import AIMessage, BaseMessage
|
||||
from langchain_core.outputs import ChatGeneration, ChatResult
|
||||
from langchain_core.language_models.chat_models import BaseChatModel, SimpleChatModel
|
||||
from langchain_core.messages import (
|
||||
AIMessage,
|
||||
AIMessageChunk,
|
||||
BaseMessage,
|
||||
)
|
||||
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
|
||||
from langchain_core.runnables import run_in_executor
|
||||
|
||||
from langchain.callbacks.manager import (
|
||||
AsyncCallbackManagerForLLMRun,
|
||||
@@ -42,3 +48,151 @@ class FakeChatModel(SimpleChatModel):
|
||||
@property
|
||||
def _identifying_params(self) -> Dict[str, Any]:
|
||||
return {"key": "fake"}
|
||||
|
||||
|
||||
class GenericFakeChatModel(BaseChatModel):
|
||||
"""A generic fake chat model that can be used to test the chat model interface.
|
||||
|
||||
* Chat model should be usable in both sync and async tests
|
||||
* Invokes on_llm_new_token to allow for testing of callback related code for new
|
||||
tokens.
|
||||
* Includes logic to break messages into message chunk to facilitate testing of
|
||||
streaming.
|
||||
"""
|
||||
|
||||
messages: Iterator[AIMessage]
|
||||
"""Get an iterator over messages.
|
||||
|
||||
This can be expanded to accept other types like Callables / dicts / strings
|
||||
to make the interface more generic if needed.
|
||||
|
||||
Note: if you want to pass a list, you can use `iter` to convert it to an iterator.
|
||||
|
||||
Streaming is supported by delegating to `_generate` and breaking the resulting
message into chunks: content is split on whitespace (preserving it), and any
`function_call` in `additional_kwargs` is split into smaller chunks.
|
||||
"""
|
||||
|
||||
def _generate(
|
||||
self,
|
||||
messages: List[BaseMessage],
|
||||
stop: Optional[List[str]] = None,
|
||||
run_manager: Optional[CallbackManagerForLLMRun] = None,
|
||||
**kwargs: Any,
|
||||
) -> ChatResult:
|
||||
"""Top Level call"""
|
||||
message = next(self.messages)
|
||||
generation = ChatGeneration(message=message)
|
||||
return ChatResult(generations=[generation])
|
||||
|
||||
def _stream(
|
||||
self,
|
||||
messages: List[BaseMessage],
|
||||
stop: Optional[List[str]] = None,
|
||||
run_manager: Optional[CallbackManagerForLLMRun] = None,
|
||||
**kwargs: Any,
|
||||
) -> Iterator[ChatGenerationChunk]:
|
||||
"""Stream the output of the model."""
|
||||
chat_result = self._generate(
|
||||
messages, stop=stop, run_manager=run_manager, **kwargs
|
||||
)
|
||||
if not isinstance(chat_result, ChatResult):
|
||||
raise ValueError(
|
||||
f"Expected generate to return a ChatResult, "
|
||||
f"but got {type(chat_result)} instead."
|
||||
)
|
||||
|
||||
message = chat_result.generations[0].message
|
||||
|
||||
if not isinstance(message, AIMessage):
|
||||
raise ValueError(
|
||||
f"Expected invoke to return an AIMessage, "
|
||||
f"but got {type(message)} instead."
|
||||
)
|
||||
|
||||
content = message.content
|
||||
|
||||
if content:
|
||||
# Use a regular expression to split on whitespace with a capture group
|
||||
# so that we can preserve the whitespace in the output.
|
||||
assert isinstance(content, str)
|
||||
content_chunks = cast(List[str], re.split(r"(\s)", content))
|
||||
|
||||
for token in content_chunks:
|
||||
chunk = ChatGenerationChunk(message=AIMessageChunk(content=token))
|
||||
yield chunk
|
||||
if run_manager:
|
||||
run_manager.on_llm_new_token(token, chunk=chunk)
|
||||
|
||||
if message.additional_kwargs:
|
||||
for key, value in message.additional_kwargs.items():
|
||||
# We should further break down the additional kwargs into chunks
|
||||
# Special case for function call
|
||||
if key == "function_call":
|
||||
for fkey, fvalue in value.items():
|
||||
if isinstance(fvalue, str):
|
||||
# Break function call by `,`
|
||||
fvalue_chunks = cast(List[str], re.split(r"(,)", fvalue))
|
||||
for fvalue_chunk in fvalue_chunks:
|
||||
chunk = ChatGenerationChunk(
|
||||
message=AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"function_call": {fkey: fvalue_chunk}
|
||||
},
|
||||
)
|
||||
)
|
||||
yield chunk
|
||||
if run_manager:
|
||||
run_manager.on_llm_new_token(
|
||||
"",
|
||||
chunk=chunk, # No token for function call
|
||||
)
|
||||
else:
|
||||
chunk = ChatGenerationChunk(
|
||||
message=AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={"function_call": {fkey: fvalue}},
|
||||
)
|
||||
)
|
||||
yield chunk
|
||||
if run_manager:
|
||||
run_manager.on_llm_new_token(
|
||||
"",
|
||||
chunk=chunk, # No token for function call
|
||||
)
|
||||
else:
|
||||
chunk = ChatGenerationChunk(
|
||||
message=AIMessageChunk(
|
||||
content="", additional_kwargs={key: value}
|
||||
)
|
||||
)
|
||||
yield chunk
|
||||
if run_manager:
|
||||
run_manager.on_llm_new_token(
|
||||
"",
|
||||
chunk=chunk, # No token for function call
|
||||
)
|
||||
|
||||
async def _astream(
|
||||
self,
|
||||
messages: List[BaseMessage],
|
||||
stop: Optional[List[str]] = None,
|
||||
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
|
||||
**kwargs: Any,
|
||||
) -> AsyncIterator[ChatGenerationChunk]:
|
||||
"""Stream the output of the model."""
|
||||
result = await run_in_executor(
|
||||
None,
|
||||
self._stream,
|
||||
messages,
|
||||
stop=stop,
|
||||
run_manager=run_manager.get_sync() if run_manager else None,
|
||||
**kwargs,
|
||||
)
|
||||
for chunk in result:
|
||||
yield chunk
|
||||
|
||||
@property
|
||||
def _llm_type(self) -> str:
|
||||
return "generic-fake-chat-model"
|
||||
|
||||
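For reference, a minimal sketch of how this fake model is meant to be driven (mirroring the field docstring's note about wrapping a list with `iter`; it assumes `GenericFakeChatModel` is importable from this module):

```python
from itertools import cycle

from langchain_core.messages import AIMessage

# The `messages` field expects an iterator, so a plain list is wrapped with iter().
one_shot = GenericFakeChatModel(messages=iter([AIMessage(content="hello world")]))
print(one_shot.invoke("hi"))  # -> AIMessage(content="hello world")

# cycle() provides an endless supply, so repeated calls keep returning responses.
looping = GenericFakeChatModel(messages=cycle([AIMessage(content="hello world")]))
for chunk in looping.stream("hi"):
    print(chunk.content, end="|")  # streams "hello", " ", "world" as separate chunks
```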
libs/langchain/tests/unit_tests/llms/test_fake_chat_model.py (new file, 185 lines)
@@ -0,0 +1,185 @@
|
||||
"""Tests for verifying that testing utility code works as expected."""
|
||||
from itertools import cycle
|
||||
from typing import Any, Dict, List, Optional, Union
|
||||
from uuid import UUID
|
||||
|
||||
from langchain_core.messages import AIMessage, AIMessageChunk, BaseMessage
|
||||
from langchain_core.outputs import ChatGenerationChunk, GenerationChunk
|
||||
|
||||
from langchain.callbacks.base import AsyncCallbackHandler
|
||||
from tests.unit_tests.llms.fake_chat_model import GenericFakeChatModel
|
||||
|
||||
|
||||
def test_generic_fake_chat_model_invoke() -> None:
|
||||
# Will alternate between responding with hello and goodbye
|
||||
infinite_cycle = cycle([AIMessage(content="hello"), AIMessage(content="goodbye")])
|
||||
model = GenericFakeChatModel(messages=infinite_cycle)
|
||||
response = model.invoke("meow")
|
||||
assert response == AIMessage(content="hello")
|
||||
response = model.invoke("kitty")
|
||||
assert response == AIMessage(content="goodbye")
|
||||
response = model.invoke("meow")
|
||||
assert response == AIMessage(content="hello")
|
||||
|
||||
|
||||
async def test_generic_fake_chat_model_ainvoke() -> None:
|
||||
# Will alternate between responding with hello and goodbye
|
||||
infinite_cycle = cycle([AIMessage(content="hello"), AIMessage(content="goodbye")])
|
||||
model = GenericFakeChatModel(messages=infinite_cycle)
|
||||
response = await model.ainvoke("meow")
|
||||
assert response == AIMessage(content="hello")
|
||||
response = await model.ainvoke("kitty")
|
||||
assert response == AIMessage(content="goodbye")
|
||||
response = await model.ainvoke("meow")
|
||||
assert response == AIMessage(content="hello")
|
||||
|
||||
|
||||
async def test_generic_fake_chat_model_stream() -> None:
|
||||
"""Test streaming."""
|
||||
infinite_cycle = cycle(
|
||||
[
|
||||
AIMessage(content="hello goodbye"),
|
||||
]
|
||||
)
|
||||
model = GenericFakeChatModel(messages=infinite_cycle)
|
||||
chunks = [chunk async for chunk in model.astream("meow")]
|
||||
assert chunks == [
|
||||
AIMessageChunk(content="hello"),
|
||||
AIMessageChunk(content=" "),
|
||||
AIMessageChunk(content="goodbye"),
|
||||
]
|
||||
|
||||
chunks = [chunk for chunk in model.stream("meow")]
|
||||
assert chunks == [
|
||||
AIMessageChunk(content="hello"),
|
||||
AIMessageChunk(content=" "),
|
||||
AIMessageChunk(content="goodbye"),
|
||||
]
|
||||
|
||||
# Test streaming of additional kwargs.
|
||||
# Relying on insertion order of the additional kwargs dict
|
||||
message = AIMessage(content="", additional_kwargs={"foo": 42, "bar": 24})
|
||||
model = GenericFakeChatModel(messages=cycle([message]))
|
||||
chunks = [chunk async for chunk in model.astream("meow")]
|
||||
assert chunks == [
|
||||
AIMessageChunk(content="", additional_kwargs={"foo": 42}),
|
||||
AIMessageChunk(content="", additional_kwargs={"bar": 24}),
|
||||
]
|
||||
|
||||
message = AIMessage(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"function_call": {
|
||||
"name": "move_file",
|
||||
"arguments": '{\n "source_path": "foo",\n "'
|
||||
'destination_path": "bar"\n}',
|
||||
}
|
||||
},
|
||||
)
|
||||
model = GenericFakeChatModel(messages=cycle([message]))
|
||||
chunks = [chunk async for chunk in model.astream("meow")]
|
||||
|
||||
assert chunks == [
|
||||
AIMessageChunk(
|
||||
content="", additional_kwargs={"function_call": {"name": "move_file"}}
|
||||
),
|
||||
AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"function_call": {"arguments": '{\n "source_path": "foo"'}
|
||||
},
|
||||
),
|
||||
AIMessageChunk(
|
||||
content="", additional_kwargs={"function_call": {"arguments": ","}}
|
||||
),
|
||||
AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"function_call": {"arguments": '\n "destination_path": "bar"\n}'}
|
||||
},
|
||||
),
|
||||
]
|
||||
|
||||
accumulate_chunks = None
|
||||
for chunk in chunks:
|
||||
if accumulate_chunks is None:
|
||||
accumulate_chunks = chunk
|
||||
else:
|
||||
accumulate_chunks += chunk
|
||||
|
||||
assert accumulate_chunks == AIMessageChunk(
|
||||
content="",
|
||||
additional_kwargs={
|
||||
"function_call": {
|
||||
"name": "move_file",
|
||||
"arguments": '{\n "source_path": "foo",\n "'
|
||||
'destination_path": "bar"\n}',
|
||||
}
|
||||
},
|
||||
)
|
||||
|
||||
|
||||
async def test_generic_fake_chat_model_astream_log() -> None:
|
||||
"""Test streaming."""
|
||||
infinite_cycle = cycle([AIMessage(content="hello goodbye")])
|
||||
model = GenericFakeChatModel(messages=infinite_cycle)
|
||||
log_patches = [
|
||||
log_patch async for log_patch in model.astream_log("meow", diff=False)
|
||||
]
|
||||
final = log_patches[-1]
|
||||
assert final.state["streamed_output"] == [
|
||||
AIMessageChunk(content="hello"),
|
||||
AIMessageChunk(content=" "),
|
||||
AIMessageChunk(content="goodbye"),
|
||||
]
|
||||
|
||||
|
||||
async def test_callback_handlers() -> None:
|
||||
"""Verify that model is implemented correctly with handlers working."""
|
||||
|
||||
class MyCustomAsyncHandler(AsyncCallbackHandler):
|
||||
def __init__(self, store: List[str]) -> None:
|
||||
self.store = store
|
||||
|
||||
async def on_chat_model_start(
|
||||
self,
|
||||
serialized: Dict[str, Any],
|
||||
messages: List[List[BaseMessage]],
|
||||
*,
|
||||
run_id: UUID,
|
||||
parent_run_id: Optional[UUID] = None,
|
||||
tags: Optional[List[str]] = None,
|
||||
metadata: Optional[Dict[str, Any]] = None,
|
||||
**kwargs: Any,
|
||||
) -> Any:
|
||||
# Do nothing
|
||||
# Required to implement since this is an abstract method
|
||||
pass
|
||||
|
||||
async def on_llm_new_token(
|
||||
self,
|
||||
token: str,
|
||||
*,
|
||||
chunk: Optional[Union[GenerationChunk, ChatGenerationChunk]] = None,
|
||||
run_id: UUID,
|
||||
parent_run_id: Optional[UUID] = None,
|
||||
tags: Optional[List[str]] = None,
|
||||
**kwargs: Any,
|
||||
) -> None:
|
||||
self.store.append(token)
|
||||
|
||||
infinite_cycle = cycle(
|
||||
[
|
||||
AIMessage(content="hello goodbye"),
|
||||
]
|
||||
)
|
||||
model = GenericFakeChatModel(messages=infinite_cycle)
|
||||
tokens: List[str] = []
|
||||
# New model
|
||||
results = list(model.stream("meow", {"callbacks": [MyCustomAsyncHandler(tokens)]}))
|
||||
assert results == [
|
||||
AIMessageChunk(content="hello"),
|
||||
AIMessageChunk(content=" "),
|
||||
AIMessageChunk(content="goodbye"),
|
||||
]
|
||||
assert tokens == ["hello", " ", "goodbye"]
|
||||
@@ -42,8 +42,8 @@ from langchain_core.outputs import (
|
||||
ChatGenerationChunk,
|
||||
ChatResult,
|
||||
)
|
||||
from langchain_core.pydantic_v1 import root_validator
|
||||
from langchain_core.utils import get_from_dict_or_env
|
||||
from langchain_core.pydantic_v1 import SecretStr, root_validator
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
|
||||
|
||||
# TODO: Remove 'type: ignore' once mistralai has stubs or py.typed marker.
|
||||
from mistralai.async_client import MistralAsyncClient # type: ignore[import]
|
||||
@@ -111,18 +111,11 @@ async def acompletion_with_retry(
|
||||
|
||||
@retry_decorator
|
||||
async def _completion_with_retry(**kwargs: Any) -> Any:
|
||||
client = MistralAsyncClient(
|
||||
api_key=llm.mistral_api_key,
|
||||
endpoint=llm.endpoint,
|
||||
max_retries=llm.max_retries,
|
||||
timeout=llm.timeout,
|
||||
max_concurrent_requests=llm.max_concurrent_requests,
|
||||
)
|
||||
stream = kwargs.pop("stream", False)
|
||||
if stream:
|
||||
return client.chat_stream(**kwargs)
|
||||
return llm.async_client.chat_stream(**kwargs)
|
||||
else:
|
||||
return await client.chat(**kwargs)
|
||||
return await llm.async_client.chat(**kwargs)
|
||||
|
||||
return await _completion_with_retry(**kwargs)
|
||||
|
||||
@@ -163,8 +156,9 @@ def _convert_message_to_mistral_chat_message(
|
||||
class ChatMistralAI(BaseChatModel):
|
||||
"""A chat model that uses the MistralAI API."""
|
||||
|
||||
client: Any #: :meta private:
|
||||
mistral_api_key: Optional[str] = None
|
||||
client: MistralClient = None #: :meta private:
|
||||
async_client: MistralAsyncClient = None #: :meta private:
|
||||
mistral_api_key: Optional[SecretStr] = None
|
||||
endpoint: str = DEFAULT_MISTRAL_ENDPOINT
|
||||
max_retries: int = 5
|
||||
timeout: int = 120
|
||||
@@ -224,15 +218,24 @@ class ChatMistralAI(BaseChatModel):
|
||||
"Please install it with `pip install mistralai`"
|
||||
)
|
||||
|
||||
values["mistral_api_key"] = get_from_dict_or_env(
|
||||
values, "mistral_api_key", "MISTRAL_API_KEY", default=""
|
||||
values["mistral_api_key"] = convert_to_secret_str(
|
||||
get_from_dict_or_env(
|
||||
values, "mistral_api_key", "MISTRAL_API_KEY", default=""
|
||||
)
|
||||
)
|
||||
values["client"] = MistralClient(
|
||||
api_key=values["mistral_api_key"],
|
||||
api_key=values["mistral_api_key"].get_secret_value(),
|
||||
endpoint=values["endpoint"],
|
||||
max_retries=values["max_retries"],
|
||||
timeout=values["timeout"],
|
||||
)
|
||||
values["async_client"] = MistralAsyncClient(
|
||||
api_key=values["mistral_api_key"].get_secret_value(),
|
||||
endpoint=values["endpoint"],
|
||||
max_retries=values["max_retries"],
|
||||
timeout=values["timeout"],
|
||||
max_concurrent_requests=values["max_concurrent_requests"],
|
||||
)
|
||||
|
||||
if values["temperature"] is not None and not 0 <= values["temperature"] <= 1:
|
||||
raise ValueError("temperature must be in the range [0.0, 1.0]")
|
||||
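As an aside, a minimal sketch of what the `convert_to_secret_str` / `get_secret_value()` pattern introduced above provides: the key is masked when printed or logged and only exposed when explicitly unwrapped.

```python
from langchain_core.utils import convert_to_secret_str

key = convert_to_secret_str("my-raw-api-key")
print(key)                     # '**********' -- the raw key is masked in str()/repr()
print(key.get_secret_value())  # 'my-raw-api-key' -- only exposed on explicit request
```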
@@ -286,10 +289,12 @@ class ChatMistralAI(BaseChatModel):
|
||||
self, messages: List[BaseMessage], stop: Optional[List[str]]
|
||||
) -> Tuple[List[MistralChatMessage], Dict[str, Any]]:
|
||||
params = self._client_params
|
||||
if stop is not None:
|
||||
if stop is not None or "stop" in params:
|
||||
if "stop" in params:
|
||||
raise ValueError("`stop` found in both the input and default params.")
|
||||
params["stop"] = stop
|
||||
params.pop("stop")
|
||||
logger.warning(
|
||||
"Parameter `stop` not yet supported (https://docs.mistral.ai/api)"
|
||||
)
|
||||
message_dicts = [_convert_message_to_mistral_chat_message(m) for m in messages]
|
||||
return message_dicts, params
|
||||
|
||||
@@ -316,7 +321,7 @@ class ChatMistralAI(BaseChatModel):
|
||||
default_chunk_class = chunk.__class__
|
||||
yield ChatGenerationChunk(message=chunk)
|
||||
if run_manager:
|
||||
run_manager.on_llm_new_token(chunk.content)
|
||||
run_manager.on_llm_new_token(token=chunk.content, chunk=chunk)
|
||||
|
||||
async def _astream(
|
||||
self,
|
||||
@@ -341,7 +346,7 @@ class ChatMistralAI(BaseChatModel):
|
||||
default_chunk_class = chunk.__class__
|
||||
yield ChatGenerationChunk(message=chunk)
|
||||
if run_manager:
|
||||
await run_manager.on_llm_new_token(chunk.content)
|
||||
await run_manager.on_llm_new_token(token=chunk.content, chunk=chunk)
|
||||
|
||||
async def _agenerate(
|
||||
self,
|
||||
|
||||
@@ -1 +1,56 @@
# langchain-openai

This package contains the LangChain integrations for OpenAI through their `openai` SDK.

## Installation and Setup

- Install the LangChain partner package
```bash
pip install langchain-openai
```
- Get an OpenAI API key and set it as an environment variable (`OPENAI_API_KEY`)


## LLM

See a [usage example](http://python.langchain.com/docs/integrations/llms/openai).

```python
from langchain_openai import OpenAI
```

If you are using a model hosted on `Azure`, you should use a different wrapper:
```python
from langchain_openai import AzureOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](http://python.langchain.com/docs/integrations/llms/azure_openai)


## Chat model

See a [usage example](http://python.langchain.com/docs/integrations/chat/openai).

```python
from langchain_openai import ChatOpenAI
```

If you are using a model hosted on `Azure`, you should use a different wrapper:
```python
from langchain_openai import AzureChatOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](http://python.langchain.com/docs/integrations/chat/azure_chat_openai)


## Text Embedding Model

See a [usage example](http://python.langchain.com/docs/integrations/text_embedding/openai)

```python
from langchain_openai import OpenAIEmbeddings
```

If you are using a model hosted on `Azure`, you should use a different wrapper:
```python
from langchain_openai import AzureOpenAIEmbeddings
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](https://python.langchain.com/docs/integrations/text_embedding/azureopenai)
|
||||
libs/partners/openai/tests/unit_tests/test_load.py (new file, 46 lines)
@@ -0,0 +1,46 @@
|
||||
from langchain_core.load.dump import dumpd, dumps
|
||||
from langchain_core.load.load import load, loads
|
||||
|
||||
from langchain_openai import ChatOpenAI, OpenAI
|
||||
|
||||
|
||||
def test_loads_openai_llm() -> None:
|
||||
llm = OpenAI(model="davinci", temperature=0.5, openai_api_key="hello")
|
||||
llm_string = dumps(llm)
|
||||
llm2 = loads(llm_string, secrets_map={"OPENAI_API_KEY": "hello"})
|
||||
|
||||
assert llm2 == llm
|
||||
llm_string_2 = dumps(llm2)
|
||||
assert llm_string_2 == llm_string
|
||||
assert isinstance(llm2, OpenAI)
|
||||
|
||||
|
||||
def test_load_openai_llm() -> None:
|
||||
llm = OpenAI(model="davinci", temperature=0.5, openai_api_key="hello")
|
||||
llm_obj = dumpd(llm)
|
||||
llm2 = load(llm_obj, secrets_map={"OPENAI_API_KEY": "hello"})
|
||||
|
||||
assert llm2 == llm
|
||||
assert dumpd(llm2) == llm_obj
|
||||
assert isinstance(llm2, OpenAI)
|
||||
|
||||
|
||||
def test_loads_openai_chat() -> None:
|
||||
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.5, openai_api_key="hello")
|
||||
llm_string = dumps(llm)
|
||||
llm2 = loads(llm_string, secrets_map={"OPENAI_API_KEY": "hello"})
|
||||
|
||||
assert llm2 == llm
|
||||
llm_string_2 = dumps(llm2)
|
||||
assert llm_string_2 == llm_string
|
||||
assert isinstance(llm2, ChatOpenAI)
|
||||
|
||||
|
||||
def test_load_openai_chat() -> None:
|
||||
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.5, openai_api_key="hello")
|
||||
llm_obj = dumpd(llm)
|
||||
llm2 = load(llm_obj, secrets_map={"OPENAI_API_KEY": "hello"})
|
||||
|
||||
assert llm2 == llm
|
||||
assert dumpd(llm2) == llm_obj
|
||||
assert isinstance(llm2, ChatOpenAI)
|
||||
@@ -10,4 +10,4 @@ pip install -U langchain-robocorp
|
||||
|
||||
## Action Server Toolkit
|
||||
|
||||
See [ActionServerToolkit](./docs/toolkit.ipynb) for detailed documentation.
|
||||
See [ActionServerToolkit](https://python.langchain.com/docs/integrations/toolkits/robocorp) for detailed documentation.
|
||||
|
||||
libs/partners/robocorp/poetry.lock (generated, 12 changed lines)
@@ -251,7 +251,7 @@ files = [
|
||||
|
||||
[[package]]
|
||||
name = "langchain-core"
|
||||
version = "0.1.4"
|
||||
version = "0.1.8"
|
||||
description = "Building applications with LLMs through composability"
|
||||
optional = false
|
||||
python-versions = ">=3.8.1,<4.0"
|
||||
@@ -611,7 +611,6 @@ files = [
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69b023b2b4daa7548bcfbd4aa3da05b3a74b772db9e23b982788168117739938"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:81e0b275a9ecc9c0c0c07b4b90ba548307583c125f54d5b6946cfee6360c733d"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba336e390cd8e4d1739f42dfe9bb83a3cc2e80f567d8805e11b46f4a943f5515"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:326c013efe8048858a6d312ddd31d56e468118ad4cdeda36c719bf5bb6192290"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-win32.whl", hash = "sha256:bd4af7373a854424dabd882decdc5579653d7868b8fb26dc7d0e99f823aa5924"},
|
||||
{file = "PyYAML-6.0.1-cp310-cp310-win_amd64.whl", hash = "sha256:fd1592b3fdf65fff2ad0004b5e363300ef59ced41c2e6b3a99d4089fa8c5435d"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6965a7bc3cf88e5a1c3bd2e0b5c22f8d677dc88a455344035f03399034eb3007"},
|
||||
@@ -619,15 +618,8 @@ files = [
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42f8152b8dbc4fe7d96729ec2b99c7097d656dc1213a3229ca5383f973a5ed6d"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:062582fca9fabdd2c8b54a3ef1c978d786e0f6b3a1510e0ac93ef59e0ddae2bc"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d2b04aac4d386b172d5b9692e2d2da8de7bfb6c387fa4f801fbf6fb2e6ba4673"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:e7d73685e87afe9f3b36c799222440d6cf362062f78be1013661b00c5c6f678b"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-win32.whl", hash = "sha256:1635fd110e8d85d55237ab316b5b011de701ea0f29d07611174a1b42f1444741"},
|
||||
{file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"},
|
||||
{file = "PyYAML-6.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:0d3304d8c0adc42be59c5f8a4d9e3d7379e6955ad754aa9d6ab7a398b59dd1df"},
|
||||
{file = "PyYAML-6.0.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50550eb667afee136e9a77d6dc71ae76a44df8b3e51e41b77f6de2932bfe0f47"},
|
||||
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1fe35611261b29bd1de0070f0b2f47cb6ff71fa6595c077e42bd0c419fa27b98"},
|
||||
{file = "PyYAML-6.0.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:704219a11b772aea0d8ecd7058d0082713c3562b4e271b849ad7dc4a5c90c13c"},
|
||||
@@ -644,7 +636,6 @@ files = [
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0cd17c15d3bb3fa06978b4e8958dcdc6e0174ccea823003a106c7d4d7899ac5"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:28c119d996beec18c05208a8bd78cbe4007878c6dd15091efb73a30e90539696"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7e07cbde391ba96ab58e532ff4803f79c4129397514e1413a7dc761ccd755735"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:49a183be227561de579b4a36efbb21b3eab9651dd81b1858589f796549873dd6"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-win32.whl", hash = "sha256:184c5108a2aca3c5b3d3bf9395d50893a7ab82a38004c8f61c258d4428e80206"},
|
||||
{file = "PyYAML-6.0.1-cp38-cp38-win_amd64.whl", hash = "sha256:1e2722cc9fbb45d9b87631ac70924c11d3a401b2d7f410cc0e3bbf249f2dca62"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:9eb6caa9a297fc2c2fb8862bc5370d0303ddba53ba97e71f08023b6cd73d16a8"},
|
||||
@@ -652,7 +643,6 @@ files = [
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5773183b6446b2c99bb77e77595dd486303b4faab2b086e7b17bc6bef28865f6"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b786eecbdf8499b9ca1d697215862083bd6d2a99965554781d0d8d1ad31e13a0"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc1bf2925a1ecd43da378f4db9e4f799775d6367bdb94671027b73b393a7c42c"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04ac92ad1925b2cff1db0cfebffb6ffc43457495c9b3c39d3fcae417d7125dc5"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-win32.whl", hash = "sha256:faca3bdcf85b2fc05d06ff3fbc1f83e1391b3e724afa3feba7d13eeab355484c"},
|
||||
{file = "PyYAML-6.0.1-cp39-cp39-win_amd64.whl", hash = "sha256:510c9deebc5c0225e8c96813043e62b680ba2f9c50a08d3724c7f28a747d1486"},
|
||||
{file = "PyYAML-6.0.1.tar.gz", hash = "sha256:bfdf460b1736c775f2ba9f6a92bca30bc2095067b8a9d77876d1fad6cc3b4a43"},
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
[tool.poetry]
|
||||
name = "langchain-robocorp"
|
||||
version = "0.0.1"
|
||||
version = "0.0.1.post1"
|
||||
description = "An integration package connecting Robocorp and LangChain"
|
||||
authors = []
|
||||
readme = "README.md"
|
||||
|
||||
@@ -1,5 +1,9 @@
|
||||
from langchain_together.embeddings import TogetherEmbeddings
|
||||
from langchain_together.llms import Together
|
||||
from langchain_together.version import __version__
|
||||
|
||||
__all__ = [
|
||||
"__version__",
|
||||
"Together",
|
||||
"TogetherEmbeddings",
|
||||
]
|
||||
|
||||
libs/partners/together/langchain_together/llms.py (new file, 204 lines)
@@ -0,0 +1,204 @@
|
||||
"""Wrapper around Together AI's Completion API."""
|
||||
import logging
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
import requests
|
||||
from aiohttp import ClientSession
|
||||
from langchain_core.callbacks import (
|
||||
AsyncCallbackManagerForLLMRun,
|
||||
CallbackManagerForLLMRun,
|
||||
)
|
||||
from langchain_core.language_models.llms import LLM
|
||||
from langchain_core.pydantic_v1 import Extra, SecretStr, root_validator
|
||||
from langchain_core.utils import convert_to_secret_str, get_from_dict_or_env
|
||||
|
||||
from langchain_together.version import __version__
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class Together(LLM):
|
||||
"""LLM models from `Together`.
|
||||
|
||||
To use, you'll need an API key, which you can find here:
https://api.together.xyz/settings/api-keys. This can be passed in as the init param
``together_api_key`` or set as the environment variable ``TOGETHER_API_KEY``.
|
||||
|
||||
Together AI API reference: https://docs.together.ai/reference/inference
|
||||
"""
|
||||
|
||||
base_url: str = "https://api.together.xyz/inference"
|
||||
"""Base inference API URL."""
|
||||
together_api_key: SecretStr
|
||||
"""Together AI API key. Get it here: https://api.together.xyz/settings/api-keys"""
|
||||
model: str
|
||||
"""Model name. Available models listed here:
|
||||
https://docs.together.ai/docs/inference-models
|
||||
"""
|
||||
temperature: Optional[float] = None
|
||||
"""Model temperature."""
|
||||
top_p: Optional[float] = None
|
||||
"""Used to dynamically adjust the number of choices for each predicted token based
|
||||
on the cumulative probabilities. A value of 1 will always yield the same
|
||||
output. A temperature less than 1 favors more correctness and is appropriate
|
||||
for question answering or summarization. A value greater than 1 introduces more
|
||||
randomness in the output.
|
||||
"""
|
||||
top_k: Optional[int] = None
|
||||
"""Used to limit the number of choices for the next predicted word or token. It
|
||||
specifies the maximum number of tokens to consider at each step, based on their
|
||||
probability of occurrence. This technique helps to speed up the generation
|
||||
process and can improve the quality of the generated text by focusing on the
|
||||
most likely options.
|
||||
"""
|
||||
max_tokens: Optional[int] = None
|
||||
"""The maximum number of tokens to generate."""
|
||||
repetition_penalty: Optional[float] = None
|
||||
"""A number that controls the diversity of generated text by reducing the
|
||||
likelihood of repeated sequences. Higher values decrease repetition.
|
||||
"""
|
||||
logprobs: Optional[int] = None
|
||||
"""An integer that specifies how many top token log probabilities are included in
|
||||
the response for each token generation step.
|
||||
"""
|
||||
|
||||
class Config:
|
||||
"""Configuration for this pydantic object."""
|
||||
|
||||
extra = Extra.forbid
|
||||
|
||||
@root_validator(pre=True)
|
||||
def validate_environment(cls, values: Dict) -> Dict:
|
||||
"""Validate that api key exists in environment."""
|
||||
values["together_api_key"] = convert_to_secret_str(
|
||||
get_from_dict_or_env(values, "together_api_key", "TOGETHER_API_KEY")
|
||||
)
|
||||
return values
|
||||
|
||||
@property
|
||||
def _llm_type(self) -> str:
|
||||
"""Return type of model."""
|
||||
return "together"
|
||||
|
||||
def _format_output(self, output: dict) -> str:
|
||||
return output["output"]["choices"][0]["text"]
|
||||
|
||||
@staticmethod
|
||||
def get_user_agent() -> str:
|
||||
return f"langchain-together/{__version__}"
|
||||
|
||||
@property
|
||||
def default_params(self) -> Dict[str, Any]:
|
||||
return {
|
||||
"model": self.model,
|
||||
"temperature": self.temperature,
|
||||
"top_p": self.top_p,
|
||||
"top_k": self.top_k,
|
||||
"max_tokens": self.max_tokens,
|
||||
"repetition_penalty": self.repetition_penalty,
|
||||
}
|
||||
|
||||
def _call(
|
||||
self,
|
||||
prompt: str,
|
||||
stop: Optional[List[str]] = None,
|
||||
run_manager: Optional[CallbackManagerForLLMRun] = None,
|
||||
**kwargs: Any,
|
||||
) -> str:
|
||||
"""Call out to Together's text generation endpoint.
|
||||
|
||||
Args:
|
||||
prompt: The prompt to pass into the model.
|
||||
|
||||
Returns:
|
||||
The string generated by the model.
|
||||
"""
|
||||
|
||||
headers = {
|
||||
"Authorization": f"Bearer {self.together_api_key.get_secret_value()}",
|
||||
"Content-Type": "application/json",
|
||||
}
|
||||
stop_to_use = stop[0] if stop and len(stop) == 1 else stop
|
||||
payload: Dict[str, Any] = {
|
||||
**self.default_params,
|
||||
"prompt": prompt,
|
||||
"stop": stop_to_use,
|
||||
**kwargs,
|
||||
}
|
||||
|
||||
# filter None values to not pass them to the http payload
|
||||
payload = {k: v for k, v in payload.items() if v is not None}
|
||||
response = requests.post(url=self.base_url, json=payload, headers=headers)
|
||||
|
||||
if response.status_code >= 500:
|
||||
raise Exception(f"Together Server: Error {response.status_code}")
|
||||
elif response.status_code >= 400:
|
||||
raise ValueError(f"Together received an invalid payload: {response.text}")
|
||||
elif response.status_code != 200:
|
||||
raise Exception(
|
||||
f"Together returned an unexpected response with status "
|
||||
f"{response.status_code}: {response.text}"
|
||||
)
|
||||
|
||||
data = response.json()
|
||||
if data.get("status") != "finished":
|
||||
err_msg = data.get("error", "Undefined Error")
|
||||
raise Exception(err_msg)
|
||||
|
||||
output = self._format_output(data)
|
||||
|
||||
return output
|
||||
|
||||
async def _acall(
|
||||
self,
|
||||
prompt: str,
|
||||
stop: Optional[List[str]] = None,
|
||||
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
|
||||
**kwargs: Any,
|
||||
) -> str:
|
||||
"""Call Together model to get predictions based on the prompt.
|
||||
|
||||
Args:
|
||||
prompt: The prompt to pass into the model.
|
||||
|
||||
Returns:
|
||||
The string generated by the model.
|
||||
"""
|
||||
headers = {
|
||||
"Authorization": f"Bearer {self.together_api_key.get_secret_value()}",
|
||||
"Content-Type": "application/json",
|
||||
}
|
||||
stop_to_use = stop[0] if stop and len(stop) == 1 else stop
|
||||
payload: Dict[str, Any] = {
|
||||
**self.default_params,
|
||||
"prompt": prompt,
|
||||
"stop": stop_to_use,
|
||||
**kwargs,
|
||||
}
|
||||
|
||||
# filter None values to not pass them to the http payload
|
||||
payload = {k: v for k, v in payload.items() if v is not None}
|
||||
async with ClientSession() as session:
|
||||
async with session.post(
|
||||
self.base_url, json=payload, headers=headers
|
||||
) as response:
|
||||
if response.status >= 500:
|
||||
raise Exception(f"Together Server: Error {response.status}")
|
||||
elif response.status >= 400:
|
||||
raise ValueError(
|
||||
f"Together received an invalid payload: {response.text}"
|
||||
)
|
||||
elif response.status != 200:
|
||||
raise Exception(
|
||||
f"Together returned an unexpected response with status "
|
||||
f"{response.status}: {response.text}"
|
||||
)
|
||||
|
||||
response_json = await response.json()
|
||||
|
||||
if response_json.get("status") != "finished":
|
||||
err_msg = response_json.get("error", "Undefined Error")
|
||||
raise Exception(err_msg)
|
||||
|
||||
output = self._format_output(response_json)
|
||||
return output
|
||||
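For reference, a minimal usage sketch of the `Together` wrapper defined above (it assumes the `langchain-together` package is installed and a valid key is available; the model id is only an example from Together's catalogue):

```python
import os

from langchain_together import Together

os.environ["TOGETHER_API_KEY"] = "..."  # or pass together_api_key="..." directly

llm = Together(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",  # example model id; see Together's model list
    temperature=0.7,
    max_tokens=128,
)
print(llm.invoke("Give me one fun fact about cats."))
```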
libs/partners/together/langchain_together/version.py (new file, 8 lines)
@@ -0,0 +1,8 @@
|
||||
"""Main entrypoint into package."""
|
||||
from importlib import metadata
|
||||
|
||||
try:
|
||||
__version__ = metadata.version(__package__)
|
||||
except metadata.PackageNotFoundError:
|
||||
# Case where package metadata is not available.
|
||||
__version__ = ""
|
||||
libs/partners/together/poetry.lock (generated, 432 changed lines)
@@ -1,4 +1,114 @@
|
||||
# This file is automatically @generated by Poetry 1.6.1 and should not be changed by hand.
|
||||
# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.
|
||||
|
||||
[[package]]
|
||||
name = "aiohttp"
|
||||
version = "3.9.1"
|
||||
description = "Async http client/server framework (asyncio)"
|
||||
optional = false
|
||||
python-versions = ">=3.8"
|
||||
files = [
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e1f80197f8b0b846a8d5cf7b7ec6084493950d0882cc5537fb7b96a69e3c8590"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c72444d17777865734aa1a4d167794c34b63e5883abb90356a0364a28904e6c0"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9b05d5cbe9dafcdc733262c3a99ccf63d2f7ce02543620d2bd8db4d4f7a22f83"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c4fa235d534b3547184831c624c0b7c1e262cd1de847d95085ec94c16fddcd5"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:289ba9ae8e88d0ba16062ecf02dd730b34186ea3b1e7489046fc338bdc3361c4"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bff7e2811814fa2271be95ab6e84c9436d027a0e59665de60edf44e529a42c1f"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:81b77f868814346662c96ab36b875d7814ebf82340d3284a31681085c051320f"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3b9c7426923bb7bd66d409da46c41e3fb40f5caf679da624439b9eba92043fa6"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:8d44e7bf06b0c0a70a20f9100af9fcfd7f6d9d3913e37754c12d424179b4e48f"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:22698f01ff5653fe66d16ffb7658f582a0ac084d7da1323e39fd9eab326a1f26"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:ca7ca5abfbfe8d39e653870fbe8d7710be7a857f8a8386fc9de1aae2e02ce7e4"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:8d7f98fde213f74561be1d6d3fa353656197f75d4edfbb3d94c9eb9b0fc47f5d"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:5216b6082c624b55cfe79af5d538e499cd5f5b976820eac31951fb4325974501"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-win32.whl", hash = "sha256:0e7ba7ff228c0d9a2cd66194e90f2bca6e0abca810b786901a569c0de082f489"},
|
||||
{file = "aiohttp-3.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:c7e939f1ae428a86e4abbb9a7c4732bf4706048818dfd979e5e2839ce0159f23"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:df9cf74b9bc03d586fc53ba470828d7b77ce51b0582d1d0b5b2fb673c0baa32d"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ecca113f19d5e74048c001934045a2b9368d77b0b17691d905af18bd1c21275e"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:8cef8710fb849d97c533f259103f09bac167a008d7131d7b2b0e3a33269185c0"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bea94403a21eb94c93386d559bce297381609153e418a3ffc7d6bf772f59cc35"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91c742ca59045dce7ba76cab6e223e41d2c70d79e82c284a96411f8645e2afff"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6c93b7c2e52061f0925c3382d5cb8980e40f91c989563d3d32ca280069fd6a87"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ee2527134f95e106cc1653e9ac78846f3a2ec1004cf20ef4e02038035a74544d"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:11ff168d752cb41e8492817e10fb4f85828f6a0142b9726a30c27c35a1835f01"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:b8c3a67eb87394386847d188996920f33b01b32155f0a94f36ca0e0c635bf3e3"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:c7b5d5d64e2a14e35a9240b33b89389e0035e6de8dbb7ffa50d10d8b65c57449"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:69985d50a2b6f709412d944ffb2e97d0be154ea90600b7a921f95a87d6f108a2"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:c9110c06eaaac7e1f5562caf481f18ccf8f6fdf4c3323feab28a93d34cc646bd"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d737e69d193dac7296365a6dcb73bbbf53bb760ab25a3727716bbd42022e8d7a"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-win32.whl", hash = "sha256:4ee8caa925aebc1e64e98432d78ea8de67b2272252b0a931d2ac3bd876ad5544"},
|
||||
{file = "aiohttp-3.9.1-cp311-cp311-win_amd64.whl", hash = "sha256:a34086c5cc285be878622e0a6ab897a986a6e8bf5b67ecb377015f06ed316587"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:f800164276eec54e0af5c99feb9494c295118fc10a11b997bbb1348ba1a52065"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:500f1c59906cd142d452074f3811614be04819a38ae2b3239a48b82649c08821"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0b0a6a36ed7e164c6df1e18ee47afbd1990ce47cb428739d6c99aaabfaf1b3af"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:69da0f3ed3496808e8cbc5123a866c41c12c15baaaead96d256477edf168eb57"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:176df045597e674fa950bf5ae536be85699e04cea68fa3a616cf75e413737eb5"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b796b44111f0cab6bbf66214186e44734b5baab949cb5fb56154142a92989aeb"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f27fdaadce22f2ef950fc10dcdf8048407c3b42b73779e48a4e76b3c35bca26c"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bcb6532b9814ea7c5a6a3299747c49de30e84472fa72821b07f5a9818bce0f66"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:54631fb69a6e44b2ba522f7c22a6fb2667a02fd97d636048478db2fd8c4e98fe"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:4b4c452d0190c5a820d3f5c0f3cd8a28ace48c54053e24da9d6041bf81113183"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:cae4c0c2ca800c793cae07ef3d40794625471040a87e1ba392039639ad61ab5b"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:565760d6812b8d78d416c3c7cfdf5362fbe0d0d25b82fed75d0d29e18d7fc30f"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:54311eb54f3a0c45efb9ed0d0a8f43d1bc6060d773f6973efd90037a51cd0a3f"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-win32.whl", hash = "sha256:85c3e3c9cb1d480e0b9a64c658cd66b3cfb8e721636ab8b0e746e2d79a7a9eed"},
|
||||
{file = "aiohttp-3.9.1-cp312-cp312-win_amd64.whl", hash = "sha256:11cb254e397a82efb1805d12561e80124928e04e9c4483587ce7390b3866d213"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:8a22a34bc594d9d24621091d1b91511001a7eea91d6652ea495ce06e27381f70"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:598db66eaf2e04aa0c8900a63b0101fdc5e6b8a7ddd805c56d86efb54eb66672"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:2c9376e2b09895c8ca8b95362283365eb5c03bdc8428ade80a864160605715f1"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41473de252e1797c2d2293804e389a6d6986ef37cbb4a25208de537ae32141dd"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9c5857612c9813796960c00767645cb5da815af16dafb32d70c72a8390bbf690"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ffcd828e37dc219a72c9012ec44ad2e7e3066bec6ff3aaa19e7d435dbf4032ca"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:219a16763dc0294842188ac8a12262b5671817042b35d45e44fd0a697d8c8361"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f694dc8a6a3112059258a725a4ebe9acac5fe62f11c77ac4dcf896edfa78ca28"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:bcc0ea8d5b74a41b621ad4a13d96c36079c81628ccc0b30cfb1603e3dfa3a014"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:90ec72d231169b4b8d6085be13023ece8fa9b1bb495e4398d847e25218e0f431"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:cf2a0ac0615842b849f40c4d7f304986a242f1e68286dbf3bd7a835e4f83acfd"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:0e49b08eafa4f5707ecfb321ab9592717a319e37938e301d462f79b4e860c32a"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2c59e0076ea31c08553e868cec02d22191c086f00b44610f8ab7363a11a5d9d8"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-win32.whl", hash = "sha256:4831df72b053b1eed31eb00a2e1aff6896fb4485301d4ccb208cac264b648db4"},
|
||||
{file = "aiohttp-3.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:3135713c5562731ee18f58d3ad1bf41e1d8883eb68b363f2ffde5b2ea4b84cc7"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:cfeadf42840c1e870dc2042a232a8748e75a36b52d78968cda6736de55582766"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:70907533db712f7aa791effb38efa96f044ce3d4e850e2d7691abd759f4f0ae0"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:cdefe289681507187e375a5064c7599f52c40343a8701761c802c1853a504558"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7481f581251bb5558ba9f635db70908819caa221fc79ee52a7f58392778c636"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:49f0c1b3c2842556e5de35f122fc0f0b721334ceb6e78c3719693364d4af8499"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0d406b01a9f5a7e232d1b0d161b40c05275ffbcbd772dc18c1d5a570961a1ca4"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d8e4450e7fe24d86e86b23cc209e0023177b6d59502e33807b732d2deb6975f"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3c0266cd6f005e99f3f51e583012de2778e65af6b73860038b968a0a8888487a"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ab221850108a4a063c5b8a70f00dd7a1975e5a1713f87f4ab26a46e5feac5a0e"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c88a15f272a0ad3d7773cf3a37cc7b7d077cbfc8e331675cf1346e849d97a4e5"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:237533179d9747080bcaad4d02083ce295c0d2eab3e9e8ce103411a4312991a0"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:02ab6006ec3c3463b528374c4cdce86434e7b89ad355e7bf29e2f16b46c7dd6f"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:04fa38875e53eb7e354ece1607b1d2fdee2d175ea4e4d745f6ec9f751fe20c7c"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-win32.whl", hash = "sha256:82eefaf1a996060602f3cc1112d93ba8b201dbf5d8fd9611227de2003dddb3b7"},
|
||||
{file = "aiohttp-3.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:9b05d33ff8e6b269e30a7957bd3244ffbce2a7a35a81b81c382629b80af1a8bf"},
|
||||
{file = "aiohttp-3.9.1.tar.gz", hash = "sha256:8fc49a87ac269d4529da45871e2ffb6874e87779c3d0e2ccd813c0899221239d"},
|
||||
]

[package.dependencies]
aiosignal = ">=1.1.2"
async-timeout = {version = ">=4.0,<5.0", markers = "python_version < \"3.11\""}
attrs = ">=17.3.0"
frozenlist = ">=1.1.1"
multidict = ">=4.5,<7.0"
yarl = ">=1.0,<2.0"

[package.extras]
speedups = ["Brotli", "aiodns", "brotlicffi"]

[[package]]
name = "aiosignal"
version = "1.3.1"
description = "aiosignal: a list of registered asynchronous callbacks"
optional = false
python-versions = ">=3.7"
files = [
    {file = "aiosignal-1.3.1-py3-none-any.whl", hash = "sha256:f8376fb07dd1e86a584e4fcdec80b36b7f81aac666ebc724e2c090300dd83b17"},
    {file = "aiosignal-1.3.1.tar.gz", hash = "sha256:54cd96e15e1649b75d6c87526a6ff0b6c1b0dd3459f43d9ca11d48c339b68cfc"},
]

[package.dependencies]
frozenlist = ">=1.1.0"

[[package]]
name = "annotated-types"
@@ -36,6 +146,36 @@ doc = ["Sphinx (>=7)", "packaging", "sphinx-autodoc-typehints (>=1.2.0)", "sphin
test = ["anyio[trio]", "coverage[toml] (>=7)", "exceptiongroup (>=1.2.0)", "hypothesis (>=4.0)", "psutil (>=5.9)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "uvloop (>=0.17)"]
trio = ["trio (>=0.23)"]

[[package]]
name = "async-timeout"
version = "4.0.3"
description = "Timeout context manager for asyncio programs"
optional = false
python-versions = ">=3.7"
files = [
    {file = "async-timeout-4.0.3.tar.gz", hash = "sha256:4640d96be84d82d02ed59ea2b7105a0f7b33abe8703703cd0ab0bf87c427522f"},
    {file = "async_timeout-4.0.3-py3-none-any.whl", hash = "sha256:7405140ff1230c310e51dc27b3145b9092d659ce68ff733fb0cefe3ee42be028"},
]

[[package]]
name = "attrs"
version = "23.2.0"
description = "Classes Without Boilerplate"
optional = false
python-versions = ">=3.7"
files = [
    {file = "attrs-23.2.0-py3-none-any.whl", hash = "sha256:99b87a485a5820b23b879f04c2305b44b951b502fd64be915879d77a7e8fc6f1"},
    {file = "attrs-23.2.0.tar.gz", hash = "sha256:935dc3b529c262f6cf76e50877d35a4bd3c1de194fd41f47a2b7ae8f19971f30"},
]

[package.extras]
cov = ["attrs[tests]", "coverage[toml] (>=5.3)"]
dev = ["attrs[tests]", "pre-commit"]
docs = ["furo", "myst-parser", "sphinx", "sphinx-notfound-page", "sphinxcontrib-towncrier", "towncrier", "zope-interface"]
tests = ["attrs[tests-no-zope]", "zope-interface"]
tests-mypy = ["mypy (>=1.6)", "pytest-mypy-plugins"]
tests-no-zope = ["attrs[tests-mypy]", "cloudpickle", "hypothesis", "pympler", "pytest (>=4.3.0)", "pytest-xdist[psutil]"]

[[package]]
name = "certifi"
version = "2023.11.17"
@@ -216,6 +356,92 @@ files = [
[package.dependencies]
python-dateutil = ">=2.7"

[[package]]
name = "frozenlist"
version = "1.4.1"
description = "A list-like structure which implements collections.abc.MutableSequence"
optional = false
python-versions = ">=3.8"
files = [
{file = "frozenlist-1.4.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:f9aa1878d1083b276b0196f2dfbe00c9b7e752475ed3b682025ff20c1c1f51ac"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:29acab3f66f0f24674b7dc4736477bcd4bc3ad4b896f5f45379a67bce8b96868"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:74fb4bee6880b529a0c6560885fce4dc95936920f9f20f53d99a213f7bf66776"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:590344787a90ae57d62511dd7c736ed56b428f04cd8c161fcc5e7232c130c69a"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:068b63f23b17df8569b7fdca5517edef76171cf3897eb68beb01341131fbd2ad"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5c849d495bf5154cd8da18a9eb15db127d4dba2968d88831aff6f0331ea9bd4c"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9750cc7fe1ae3b1611bb8cfc3f9ec11d532244235d75901fb6b8e42ce9229dfe"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a9b2de4cf0cdd5bd2dee4c4f63a653c61d2408055ab77b151c1957f221cabf2a"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:0633c8d5337cb5c77acbccc6357ac49a1770b8c487e5b3505c57b949b4b82e98"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:27657df69e8801be6c3638054e202a135c7f299267f1a55ed3a598934f6c0d75"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:f9a3ea26252bd92f570600098783d1371354d89d5f6b7dfd87359d669f2109b5"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:4f57dab5fe3407b6c0c1cc907ac98e8a189f9e418f3b6e54d65a718aaafe3950"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:e02a0e11cf6597299b9f3bbd3f93d79217cb90cfd1411aec33848b13f5c656cc"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-win32.whl", hash = "sha256:a828c57f00f729620a442881cc60e57cfcec6842ba38e1b19fd3e47ac0ff8dc1"},
|
||||
{file = "frozenlist-1.4.1-cp310-cp310-win_amd64.whl", hash = "sha256:f56e2333dda1fe0f909e7cc59f021eba0d2307bc6f012a1ccf2beca6ba362439"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:a0cb6f11204443f27a1628b0e460f37fb30f624be6051d490fa7d7e26d4af3d0"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:b46c8ae3a8f1f41a0d2ef350c0b6e65822d80772fe46b653ab6b6274f61d4a49"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:fde5bd59ab5357e3853313127f4d3565fc7dad314a74d7b5d43c22c6a5ed2ced"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:722e1124aec435320ae01ee3ac7bec11a5d47f25d0ed6328f2273d287bc3abb0"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2471c201b70d58a0f0c1f91261542a03d9a5e088ed3dc6c160d614c01649c106"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c757a9dd70d72b076d6f68efdbb9bc943665ae954dad2801b874c8c69e185068"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f146e0911cb2f1da549fc58fc7bcd2b836a44b79ef871980d605ec392ff6b0d2"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f9c515e7914626b2a2e1e311794b4c35720a0be87af52b79ff8e1429fc25f19"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:c302220494f5c1ebeb0912ea782bcd5e2f8308037b3c7553fad0e48ebad6ad82"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:442acde1e068288a4ba7acfe05f5f343e19fac87bfc96d89eb886b0363e977ec"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:1b280e6507ea8a4fa0c0a7150b4e526a8d113989e28eaaef946cc77ffd7efc0a"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:fe1a06da377e3a1062ae5fe0926e12b84eceb8a50b350ddca72dc85015873f74"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:db9e724bebd621d9beca794f2a4ff1d26eed5965b004a97f1f1685a173b869c2"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-win32.whl", hash = "sha256:e774d53b1a477a67838a904131c4b0eef6b3d8a651f8b138b04f748fccfefe17"},
|
||||
{file = "frozenlist-1.4.1-cp311-cp311-win_amd64.whl", hash = "sha256:fb3c2db03683b5767dedb5769b8a40ebb47d6f7f45b1b3e3b4b51ec8ad9d9825"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:1979bc0aeb89b33b588c51c54ab0161791149f2461ea7c7c946d95d5f93b56ae"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:cc7b01b3754ea68a62bd77ce6020afaffb44a590c2289089289363472d13aedb"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:c9c92be9fd329ac801cc420e08452b70e7aeab94ea4233a4804f0915c14eba9b"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c3894db91f5a489fc8fa6a9991820f368f0b3cbdb9cd8849547ccfab3392d86"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ba60bb19387e13597fb059f32cd4d59445d7b18b69a745b8f8e5db0346f33480"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:8aefbba5f69d42246543407ed2461db31006b0f76c4e32dfd6f42215a2c41d09"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:780d3a35680ced9ce682fbcf4cb9c2bad3136eeff760ab33707b71db84664e3a"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9acbb16f06fe7f52f441bb6f413ebae6c37baa6ef9edd49cdd567216da8600cd"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:23b701e65c7b36e4bf15546a89279bd4d8675faabc287d06bbcfac7d3c33e1e6"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:3e0153a805a98f5ada7e09826255ba99fb4f7524bb81bf6b47fb702666484ae1"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:dd9b1baec094d91bf36ec729445f7769d0d0cf6b64d04d86e45baf89e2b9059b"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:1a4471094e146b6790f61b98616ab8e44f72661879cc63fa1049d13ef711e71e"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5667ed53d68d91920defdf4035d1cdaa3c3121dc0b113255124bcfada1cfa1b8"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-win32.whl", hash = "sha256:beee944ae828747fd7cb216a70f120767fc9f4f00bacae8543c14a6831673f89"},
|
||||
{file = "frozenlist-1.4.1-cp312-cp312-win_amd64.whl", hash = "sha256:64536573d0a2cb6e625cf309984e2d873979709f2cf22839bf2d61790b448ad5"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:20b51fa3f588ff2fe658663db52a41a4f7aa6c04f6201449c6c7c476bd255c0d"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:410478a0c562d1a5bcc2f7ea448359fcb050ed48b3c6f6f4f18c313a9bdb1826"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:c6321c9efe29975232da3bd0af0ad216800a47e93d763ce64f291917a381b8eb"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:48f6a4533887e189dae092f1cf981f2e3885175f7a0f33c91fb5b7b682b6bab6"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6eb73fa5426ea69ee0e012fb59cdc76a15b1283d6e32e4f8dc4482ec67d1194d"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:fbeb989b5cc29e8daf7f976b421c220f1b8c731cbf22b9130d8815418ea45887"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:32453c1de775c889eb4e22f1197fe3bdfe457d16476ea407472b9442e6295f7a"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:693945278a31f2086d9bf3df0fe8254bbeaef1fe71e1351c3bd730aa7d31c41b"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:1d0ce09d36d53bbbe566fe296965b23b961764c0bcf3ce2fa45f463745c04701"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:3a670dc61eb0d0eb7080890c13de3066790f9049b47b0de04007090807c776b0"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:dca69045298ce5c11fd539682cff879cc1e664c245d1c64da929813e54241d11"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:a06339f38e9ed3a64e4c4e43aec7f59084033647f908e4259d279a52d3757d09"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:b7f2f9f912dca3934c1baec2e4585a674ef16fe00218d833856408c48d5beee7"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-win32.whl", hash = "sha256:e7004be74cbb7d9f34553a5ce5fb08be14fb33bc86f332fb71cbe5216362a497"},
|
||||
{file = "frozenlist-1.4.1-cp38-cp38-win_amd64.whl", hash = "sha256:5a7d70357e7cee13f470c7883a063aae5fe209a493c57d86eb7f5a6f910fae09"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:bfa4a17e17ce9abf47a74ae02f32d014c5e9404b6d9ac7f729e01562bbee601e"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b7e3ed87d4138356775346e6845cccbe66cd9e207f3cd11d2f0b9fd13681359d"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c99169d4ff810155ca50b4da3b075cbde79752443117d89429595c2e8e37fed8"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:edb678da49d9f72c9f6c609fbe41a5dfb9a9282f9e6a2253d5a91e0fc382d7c0"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6db4667b187a6742b33afbbaf05a7bc551ffcf1ced0000a571aedbb4aa42fc7b"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:55fdc093b5a3cb41d420884cdaf37a1e74c3c37a31f46e66286d9145d2063bd0"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:82e8211d69a4f4bc360ea22cd6555f8e61a1bd211d1d5d39d3d228b48c83a897"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:89aa2c2eeb20957be2d950b85974b30a01a762f3308cd02bb15e1ad632e22dc7"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9d3e0c25a2350080e9319724dede4f31f43a6c9779be48021a7f4ebde8b2d742"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:7268252af60904bf52c26173cbadc3a071cece75f873705419c8681f24d3edea"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:0c250a29735d4f15321007fb02865f0e6b6a41a6b88f1f523ca1596ab5f50bd5"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:96ec70beabbd3b10e8bfe52616a13561e58fe84c0101dd031dc78f250d5128b9"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:23b2d7679b73fe0e5a4560b672a39f98dfc6f60df63823b0a9970525325b95f6"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-win32.whl", hash = "sha256:a7496bfe1da7fb1a4e1cc23bb67c58fab69311cc7d32b5a99c2007b4b2a0e932"},
|
||||
{file = "frozenlist-1.4.1-cp39-cp39-win_amd64.whl", hash = "sha256:e6a20a581f9ce92d389a8c7d7c3dd47c81fd5d6e655c8dddf341e14aa48659d0"},
|
||||
{file = "frozenlist-1.4.1-py3-none-any.whl", hash = "sha256:04ced3e6a46b4cfffe20f9ae482818e34eba9b5fb0ce4056e4cc9b6e212d09b7"},
|
||||
{file = "frozenlist-1.4.1.tar.gz", hash = "sha256:c037a86e8513059a2613aaba4d817bb90b9d9b6b69aace3ce9c877e8c8ed402b"},
|
||||
]

[[package]]
name = "idna"
version = "3.6"
@@ -265,7 +491,7 @@ files = [

[[package]]
name = "langchain-core"
version = "0.1.1"
version = "0.1.9"
description = "Building applications with LLMs through composability"
optional = false
python-versions = ">=3.8.1,<4.0"
@@ -304,6 +530,89 @@ files = [
pydantic = ">=1,<3"
requests = ">=2,<3"

[[package]]
name = "multidict"
version = "6.0.4"
description = "multidict implementation"
optional = false
python-versions = ">=3.7"
files = [
{file = "multidict-6.0.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:0b1a97283e0c85772d613878028fec909f003993e1007eafa715b24b377cb9b8"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:eeb6dcc05e911516ae3d1f207d4b0520d07f54484c49dfc294d6e7d63b734171"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d6d635d5209b82a3492508cf5b365f3446afb65ae7ebd755e70e18f287b0adf7"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c048099e4c9e9d615545e2001d3d8a4380bd403e1a0578734e0d31703d1b0c0b"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ea20853c6dbbb53ed34cb4d080382169b6f4554d394015f1bef35e881bf83547"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:16d232d4e5396c2efbbf4f6d4df89bfa905eb0d4dc5b3549d872ab898451f569"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:36c63aaa167f6c6b04ef2c85704e93af16c11d20de1d133e39de6a0e84582a93"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:64bdf1086b6043bf519869678f5f2757f473dee970d7abf6da91ec00acb9cb98"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:43644e38f42e3af682690876cff722d301ac585c5b9e1eacc013b7a3f7b696a0"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:7582a1d1030e15422262de9f58711774e02fa80df0d1578995c76214f6954988"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:ddff9c4e225a63a5afab9dd15590432c22e8057e1a9a13d28ed128ecf047bbdc"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:ee2a1ece51b9b9e7752e742cfb661d2a29e7bcdba2d27e66e28a99f1890e4fa0"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a2e4369eb3d47d2034032a26c7a80fcb21a2cb22e1173d761a162f11e562caa5"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-win32.whl", hash = "sha256:574b7eae1ab267e5f8285f0fe881f17efe4b98c39a40858247720935b893bba8"},
|
||||
{file = "multidict-6.0.4-cp310-cp310-win_amd64.whl", hash = "sha256:4dcbb0906e38440fa3e325df2359ac6cb043df8e58c965bb45f4e406ecb162cc"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:0dfad7a5a1e39c53ed00d2dd0c2e36aed4650936dc18fd9a1826a5ae1cad6f03"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:64da238a09d6039e3bd39bb3aee9c21a5e34f28bfa5aa22518581f910ff94af3"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ff959bee35038c4624250473988b24f846cbeb2c6639de3602c073f10410ceba"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:01a3a55bd90018c9c080fbb0b9f4891db37d148a0a18722b42f94694f8b6d4c9"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c5cb09abb18c1ea940fb99360ea0396f34d46566f157122c92dfa069d3e0e982"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:666daae833559deb2d609afa4490b85830ab0dfca811a98b70a205621a6109fe"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11bdf3f5e1518b24530b8241529d2050014c884cf18b6fc69c0c2b30ca248710"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7d18748f2d30f94f498e852c67d61261c643b349b9d2a581131725595c45ec6c"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:458f37be2d9e4c95e2d8866a851663cbc76e865b78395090786f6cd9b3bbf4f4"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:b1a2eeedcead3a41694130495593a559a668f382eee0727352b9a41e1c45759a"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:7d6ae9d593ef8641544d6263c7fa6408cc90370c8cb2bbb65f8d43e5b0351d9c"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:5979b5632c3e3534e42ca6ff856bb24b2e3071b37861c2c727ce220d80eee9ed"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:dcfe792765fab89c365123c81046ad4103fcabbc4f56d1c1997e6715e8015461"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-win32.whl", hash = "sha256:3601a3cece3819534b11d4efc1eb76047488fddd0c85a3948099d5da4d504636"},
|
||||
{file = "multidict-6.0.4-cp311-cp311-win_amd64.whl", hash = "sha256:81a4f0b34bd92df3da93315c6a59034df95866014ac08535fc819f043bfd51f0"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:67040058f37a2a51ed8ea8f6b0e6ee5bd78ca67f169ce6122f3e2ec80dfe9b78"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:853888594621e6604c978ce2a0444a1e6e70c8d253ab65ba11657659dcc9100f"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:39ff62e7d0f26c248b15e364517a72932a611a9b75f35b45be078d81bdb86603"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:af048912e045a2dc732847d33821a9d84ba553f5c5f028adbd364dd4765092ac"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b1e8b901e607795ec06c9e42530788c45ac21ef3aaa11dbd0c69de543bfb79a9"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:62501642008a8b9871ddfccbf83e4222cf8ac0d5aeedf73da36153ef2ec222d2"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:99b76c052e9f1bc0721f7541e5e8c05db3941eb9ebe7b8553c625ef88d6eefde"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:509eac6cf09c794aa27bcacfd4d62c885cce62bef7b2c3e8b2e49d365b5003fe"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:21a12c4eb6ddc9952c415f24eef97e3e55ba3af61f67c7bc388dcdec1404a067"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:5cad9430ab3e2e4fa4a2ef4450f548768400a2ac635841bc2a56a2052cdbeb87"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:ab55edc2e84460694295f401215f4a58597f8f7c9466faec545093045476327d"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-win32.whl", hash = "sha256:5a4dcf02b908c3b8b17a45fb0f15b695bf117a67b76b7ad18b73cf8e92608775"},
|
||||
{file = "multidict-6.0.4-cp37-cp37m-win_amd64.whl", hash = "sha256:6ed5f161328b7df384d71b07317f4d8656434e34591f20552c7bcef27b0ab88e"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:5fc1b16f586f049820c5c5b17bb4ee7583092fa0d1c4e28b5239181ff9532e0c"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:1502e24330eb681bdaa3eb70d6358e818e8e8f908a22a1851dfd4e15bc2f8161"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:b692f419760c0e65d060959df05f2a531945af31fda0c8a3b3195d4efd06de11"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45e1ecb0379bfaab5eef059f50115b54571acfbe422a14f668fc8c27ba410e7e"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ddd3915998d93fbcd2566ddf9cf62cdb35c9e093075f862935573d265cf8f65d"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:59d43b61c59d82f2effb39a93c48b845efe23a3852d201ed2d24ba830d0b4cf2"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cc8e1d0c705233c5dd0c5e6460fbad7827d5d36f310a0fadfd45cc3029762258"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d6aa0418fcc838522256761b3415822626f866758ee0bc6632c9486b179d0b52"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6748717bb10339c4760c1e63da040f5f29f5ed6e59d76daee30305894069a660"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:4d1a3d7ef5e96b1c9e92f973e43aa5e5b96c659c9bc3124acbbd81b0b9c8a951"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:4372381634485bec7e46718edc71528024fcdc6f835baefe517b34a33c731d60"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:fc35cb4676846ef752816d5be2193a1e8367b4c1397b74a565a9d0389c433a1d"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:4b9d9e4e2b37daddb5c23ea33a3417901fa7c7b3dee2d855f63ee67a0b21e5b1"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-win32.whl", hash = "sha256:e41b7e2b59679edfa309e8db64fdf22399eec4b0b24694e1b2104fb789207779"},
|
||||
{file = "multidict-6.0.4-cp38-cp38-win_amd64.whl", hash = "sha256:d6c254ba6e45d8e72739281ebc46ea5eb5f101234f3ce171f0e9f5cc86991480"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:16ab77bbeb596e14212e7bab8429f24c1579234a3a462105cda4a66904998664"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:bc779e9e6f7fda81b3f9aa58e3a6091d49ad528b11ed19f6621408806204ad35"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4ceef517eca3e03c1cceb22030a3e39cb399ac86bff4e426d4fc6ae49052cc60"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:281af09f488903fde97923c7744bb001a9b23b039a909460d0f14edc7bf59706"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:52f2dffc8acaba9a2f27174c41c9e57f60b907bb9f096b36b1a1f3be71c6284d"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b41156839806aecb3641f3208c0dafd3ac7775b9c4c422d82ee2a45c34ba81ca"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d5e3fc56f88cc98ef8139255cf8cd63eb2c586531e43310ff859d6bb3a6b51f1"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8316a77808c501004802f9beebde51c9f857054a0c871bd6da8280e718444449"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:f70b98cd94886b49d91170ef23ec5c0e8ebb6f242d734ed7ed677b24d50c82cf"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:bf6774e60d67a9efe02b3616fee22441d86fab4c6d335f9d2051d19d90a40063"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:e69924bfcdda39b722ef4d9aa762b2dd38e4632b3641b1d9a57ca9cd18f2f83a"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:6b181d8c23da913d4ff585afd1155a0e1194c0b50c54fcfe286f70cdaf2b7176"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:52509b5be062d9eafc8170e53026fbc54cf3b32759a23d07fd935fb04fc22d95"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-win32.whl", hash = "sha256:27c523fbfbdfd19c6867af7346332b62b586eed663887392cff78d614f9ec313"},
|
||||
{file = "multidict-6.0.4-cp39-cp39-win_amd64.whl", hash = "sha256:33029f5734336aa0d4c0384525da0387ef89148dc7191aae00ca5fb23d7aafc2"},
|
||||
{file = "multidict-6.0.4.tar.gz", hash = "sha256:3666906492efb76453c0e7b97f2cf459b0682e7402c0489a95484965dbc1da49"},
|
||||
]

[[package]]
name = "mypy"
version = "0.991"
@@ -855,6 +1164,20 @@ dev = ["autoflake (>=1.3.1,<2.0.0)", "flake8 (>=3.8.3,<4.0.0)", "pre-commit (>=2
doc = ["cairosvg (>=2.5.2,<3.0.0)", "mdx-include (>=1.4.1,<2.0.0)", "mkdocs (>=1.1.2,<2.0.0)", "mkdocs-material (>=8.1.4,<9.0.0)", "pillow (>=9.3.0,<10.0.0)"]
test = ["black (>=22.3.0,<23.0.0)", "coverage (>=6.2,<7.0)", "isort (>=5.0.6,<6.0.0)", "mypy (==0.910)", "pytest (>=4.4.0,<8.0.0)", "pytest-cov (>=2.10.0,<5.0.0)", "pytest-sugar (>=0.9.4,<0.10.0)", "pytest-xdist (>=1.32.0,<4.0.0)", "rich (>=10.11.0,<14.0.0)", "shellingham (>=1.3.0,<2.0.0)"]

[[package]]
name = "types-requests"
version = "2.31.0.20240106"
description = "Typing stubs for requests"
optional = false
python-versions = ">=3.8"
files = [
    {file = "types-requests-2.31.0.20240106.tar.gz", hash = "sha256:0e1c731c17f33618ec58e022b614a1a2ecc25f7dc86800b36ef341380402c612"},
    {file = "types_requests-2.31.0.20240106-py3-none-any.whl", hash = "sha256:da997b3b6a72cc08d09f4dba9802fdbabc89104b35fe24ee588e674037689354"},
]

[package.dependencies]
urllib3 = ">=2"

[[package]]
name = "typing-extensions"
version = "4.9.0"
@@ -921,7 +1244,110 @@ files = [
[package.extras]
watchmedo = ["PyYAML (>=3.10)"]

[[package]]
name = "yarl"
version = "1.9.4"
description = "Yet another URL library"
optional = false
python-versions = ">=3.7"
files = [
{file = "yarl-1.9.4-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:a8c1df72eb746f4136fe9a2e72b0c9dc1da1cbd23b5372f94b5820ff8ae30e0e"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:a3a6ed1d525bfb91b3fc9b690c5a21bb52de28c018530ad85093cc488bee2dd2"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c38c9ddb6103ceae4e4498f9c08fac9b590c5c71b0370f98714768e22ac6fa66"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d9e09c9d74f4566e905a0b8fa668c58109f7624db96a2171f21747abc7524234"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b8477c1ee4bd47c57d49621a062121c3023609f7a13b8a46953eb6c9716ca392"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:d5ff2c858f5f6a42c2a8e751100f237c5e869cbde669a724f2062d4c4ef93551"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:357495293086c5b6d34ca9616a43d329317feab7917518bc97a08f9e55648455"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:54525ae423d7b7a8ee81ba189f131054defdb122cde31ff17477951464c1691c"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:801e9264d19643548651b9db361ce3287176671fb0117f96b5ac0ee1c3530d53"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e516dc8baf7b380e6c1c26792610230f37147bb754d6426462ab115a02944385"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-musllinux_1_1_ppc64le.whl", hash = "sha256:7d5aaac37d19b2904bb9dfe12cdb08c8443e7ba7d2852894ad448d4b8f442863"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-musllinux_1_1_s390x.whl", hash = "sha256:54beabb809ffcacbd9d28ac57b0db46e42a6e341a030293fb3185c409e626b8b"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:bac8d525a8dbc2a1507ec731d2867025d11ceadcb4dd421423a5d42c56818541"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-win32.whl", hash = "sha256:7855426dfbddac81896b6e533ebefc0af2f132d4a47340cee6d22cac7190022d"},
|
||||
{file = "yarl-1.9.4-cp310-cp310-win_amd64.whl", hash = "sha256:848cd2a1df56ddbffeb375535fb62c9d1645dde33ca4d51341378b3f5954429b"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:35a2b9396879ce32754bd457d31a51ff0a9d426fd9e0e3c33394bf4b9036b099"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4c7d56b293cc071e82532f70adcbd8b61909eec973ae9d2d1f9b233f3d943f2c"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d8a1c6c0be645c745a081c192e747c5de06e944a0d21245f4cf7c05e457c36e0"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4b3c1ffe10069f655ea2d731808e76e0f452fc6c749bea04781daf18e6039525"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:549d19c84c55d11687ddbd47eeb348a89df9cb30e1993f1b128f4685cd0ebbf8"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a7409f968456111140c1c95301cadf071bd30a81cbd7ab829169fb9e3d72eae9"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e23a6d84d9d1738dbc6e38167776107e63307dfc8ad108e580548d1f2c587f42"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d8b889777de69897406c9fb0b76cdf2fd0f31267861ae7501d93003d55f54fbe"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:03caa9507d3d3c83bca08650678e25364e1843b484f19986a527630ca376ecce"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:4e9035df8d0880b2f1c7f5031f33f69e071dfe72ee9310cfc76f7b605958ceb9"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-musllinux_1_1_ppc64le.whl", hash = "sha256:c0ec0ed476f77db9fb29bca17f0a8fcc7bc97ad4c6c1d8959c507decb22e8572"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-musllinux_1_1_s390x.whl", hash = "sha256:ee04010f26d5102399bd17f8df8bc38dc7ccd7701dc77f4a68c5b8d733406958"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:49a180c2e0743d5d6e0b4d1a9e5f633c62eca3f8a86ba5dd3c471060e352ca98"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-win32.whl", hash = "sha256:81eb57278deb6098a5b62e88ad8281b2ba09f2f1147c4767522353eaa6260b31"},
|
||||
{file = "yarl-1.9.4-cp311-cp311-win_amd64.whl", hash = "sha256:d1d2532b340b692880261c15aee4dc94dd22ca5d61b9db9a8a361953d36410b1"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:0d2454f0aef65ea81037759be5ca9947539667eecebca092733b2eb43c965a81"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:44d8ffbb9c06e5a7f529f38f53eda23e50d1ed33c6c869e01481d3fafa6b8142"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:aaaea1e536f98754a6e5c56091baa1b6ce2f2700cc4a00b0d49eca8dea471074"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3777ce5536d17989c91696db1d459574e9a9bd37660ea7ee4d3344579bb6f129"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9fc5fc1eeb029757349ad26bbc5880557389a03fa6ada41703db5e068881e5f2"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ea65804b5dc88dacd4a40279af0cdadcfe74b3e5b4c897aa0d81cf86927fee78"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa102d6d280a5455ad6a0f9e6d769989638718e938a6a0a2ff3f4a7ff8c62cc4"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:09efe4615ada057ba2d30df871d2f668af661e971dfeedf0c159927d48bbeff0"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:008d3e808d03ef28542372d01057fd09168419cdc8f848efe2804f894ae03e51"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:6f5cb257bc2ec58f437da2b37a8cd48f666db96d47b8a3115c29f316313654ff"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:992f18e0ea248ee03b5a6e8b3b4738850ae7dbb172cc41c966462801cbf62cf7"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-musllinux_1_1_s390x.whl", hash = "sha256:0e9d124c191d5b881060a9e5060627694c3bdd1fe24c5eecc8d5d7d0eb6faabc"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:3986b6f41ad22988e53d5778f91855dc0399b043fc8946d4f2e68af22ee9ff10"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-win32.whl", hash = "sha256:4b21516d181cd77ebd06ce160ef8cc2a5e9ad35fb1c5930882baff5ac865eee7"},
|
||||
{file = "yarl-1.9.4-cp312-cp312-win_amd64.whl", hash = "sha256:a9bd00dc3bc395a662900f33f74feb3e757429e545d831eef5bb280252631984"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:63b20738b5aac74e239622d2fe30df4fca4942a86e31bf47a81a0e94c14df94f"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7d7f7de27b8944f1fee2c26a88b4dabc2409d2fea7a9ed3df79b67277644e17"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c74018551e31269d56fab81a728f683667e7c28c04e807ba08f8c9e3bba32f14"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:ca06675212f94e7a610e85ca36948bb8fc023e458dd6c63ef71abfd482481aa5"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5aef935237d60a51a62b86249839b51345f47564208c6ee615ed2a40878dccdd"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2b134fd795e2322b7684155b7855cc99409d10b2e408056db2b93b51a52accc7"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:d25039a474c4c72a5ad4b52495056f843a7ff07b632c1b92ea9043a3d9950f6e"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:f7d6b36dd2e029b6bcb8a13cf19664c7b8e19ab3a58e0fefbb5b8461447ed5ec"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-musllinux_1_1_ppc64le.whl", hash = "sha256:957b4774373cf6f709359e5c8c4a0af9f6d7875db657adb0feaf8d6cb3c3964c"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-musllinux_1_1_s390x.whl", hash = "sha256:d7eeb6d22331e2fd42fce928a81c697c9ee2d51400bd1a28803965883e13cead"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:6a962e04b8f91f8c4e5917e518d17958e3bdee71fd1d8b88cdce74dd0ebbf434"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-win32.whl", hash = "sha256:f3bc6af6e2b8f92eced34ef6a96ffb248e863af20ef4fde9448cc8c9b858b749"},
|
||||
{file = "yarl-1.9.4-cp37-cp37m-win_amd64.whl", hash = "sha256:ad4d7a90a92e528aadf4965d685c17dacff3df282db1121136c382dc0b6014d2"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:ec61d826d80fc293ed46c9dd26995921e3a82146feacd952ef0757236fc137be"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8be9e837ea9113676e5754b43b940b50cce76d9ed7d2461df1af39a8ee674d9f"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:bef596fdaa8f26e3d66af846bbe77057237cb6e8efff8cd7cc8dff9a62278bbf"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2d47552b6e52c3319fede1b60b3de120fe83bde9b7bddad11a69fb0af7db32f1"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:84fc30f71689d7fc9168b92788abc977dc8cefa806909565fc2951d02f6b7d57"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4aa9741085f635934f3a2583e16fcf62ba835719a8b2b28fb2917bb0537c1dfa"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:206a55215e6d05dbc6c98ce598a59e6fbd0c493e2de4ea6cc2f4934d5a18d130"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:07574b007ee20e5c375a8fe4a0789fad26db905f9813be0f9fef5a68080de559"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:5a2e2433eb9344a163aced6a5f6c9222c0786e5a9e9cac2c89f0b28433f56e23"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:6ad6d10ed9b67a382b45f29ea028f92d25bc0bc1daf6c5b801b90b5aa70fb9ec"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-musllinux_1_1_ppc64le.whl", hash = "sha256:6fe79f998a4052d79e1c30eeb7d6c1c1056ad33300f682465e1b4e9b5a188b78"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-musllinux_1_1_s390x.whl", hash = "sha256:a825ec844298c791fd28ed14ed1bffc56a98d15b8c58a20e0e08c1f5f2bea1be"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:8619d6915b3b0b34420cf9b2bb6d81ef59d984cb0fde7544e9ece32b4b3043c3"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-win32.whl", hash = "sha256:686a0c2f85f83463272ddffd4deb5e591c98aac1897d65e92319f729c320eece"},
|
||||
{file = "yarl-1.9.4-cp38-cp38-win_amd64.whl", hash = "sha256:a00862fb23195b6b8322f7d781b0dc1d82cb3bcac346d1e38689370cc1cc398b"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:604f31d97fa493083ea21bd9b92c419012531c4e17ea6da0f65cacdcf5d0bd27"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8a854227cf581330ffa2c4824d96e52ee621dd571078a252c25e3a3b3d94a1b1"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ba6f52cbc7809cd8d74604cce9c14868306ae4aa0282016b641c661f981a6e91"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a6327976c7c2f4ee6816eff196e25385ccc02cb81427952414a64811037bbc8b"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8397a3817d7dcdd14bb266283cd1d6fc7264a48c186b986f32e86d86d35fbac5"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e0381b4ce23ff92f8170080c97678040fc5b08da85e9e292292aba67fdac6c34"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:23d32a2594cb5d565d358a92e151315d1b2268bc10f4610d098f96b147370136"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ddb2a5c08a4eaaba605340fdee8fc08e406c56617566d9643ad8bf6852778fc7"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:26a1dc6285e03f3cc9e839a2da83bcbf31dcb0d004c72d0730e755b33466c30e"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:18580f672e44ce1238b82f7fb87d727c4a131f3a9d33a5e0e82b793362bf18b4"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-musllinux_1_1_ppc64le.whl", hash = "sha256:29e0f83f37610f173eb7e7b5562dd71467993495e568e708d99e9d1944f561ec"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-musllinux_1_1_s390x.whl", hash = "sha256:1f23e4fe1e8794f74b6027d7cf19dc25f8b63af1483d91d595d4a07eca1fb26c"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:db8e58b9d79200c76956cefd14d5c90af54416ff5353c5bfd7cbe58818e26ef0"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-win32.whl", hash = "sha256:c7224cab95645c7ab53791022ae77a4509472613e839dab722a72abe5a684575"},
|
||||
{file = "yarl-1.9.4-cp39-cp39-win_amd64.whl", hash = "sha256:824d6c50492add5da9374875ce72db7a0733b29c2394890aef23d533106e2b15"},
|
||||
{file = "yarl-1.9.4-py3-none-any.whl", hash = "sha256:928cecb0ef9d5a7946eb6ff58417ad2fe9375762382f1bf5c55e61645f2c43ad"},
|
||||
{file = "yarl-1.9.4.tar.gz", hash = "sha256:566db86717cf8080b99b58b083b773a908ae40f06681e87e589a976faf8246bf"},
|
||||
]

[package.dependencies]
idna = ">=2.0"
multidict = ">=4.0"

[metadata]
lock-version = "2.0"
python-versions = ">=3.8.1,<4.0"
content-hash = "a60515abec186308d22ecd037b22dbbb353b1e5e02bb2c0d3d69cae038c520e6"
content-hash = "c5961892ffafa51111ae7e15ccae510363d066406706f85648b9e02dc794cf45"

@@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain-together"
version = "0.0.1"
version = "0.0.2"
description = "An integration package connecting Together and LangChain"
authors = []
readme = "README.md"
@@ -9,6 +9,8 @@ readme = "README.md"
python = ">=3.8.1,<4.0"
langchain-core = ">=0.0.12"
together = "^0.2.10"
requests = "^2"
aiohttp = "^3.9.1"

[tool.poetry.group.test]
optional = true
@@ -42,6 +44,7 @@ ruff = "^0.1.5"
[tool.poetry.group.typing.dependencies]
mypy = "^0.991"
langchain-core = {path = "../../core", develop = true}
types-requests = "^2"

[tool.poetry.group.dev]
optional = true

libs/partners/together/tests/integration_tests/test_llms.py (new file, 40 lines)
@@ -0,0 +1,40 @@
"""Test Together API wrapper.

In order to run this test, you need to have a Together API key.
You can get it by registering for free at https://api.together.xyz/.
A test key can be found at https://api.together.xyz/settings/api-keys

You'll then need to set the TOGETHER_API_KEY environment variable to your API key.
"""
import pytest as pytest

from langchain_together import Together


def test_together_call() -> None:
    """Test simple call to together."""
    llm = Together(
        model="togethercomputer/RedPajama-INCITE-7B-Base",
        temperature=0.2,
        max_tokens=250,
    )
    output = llm.invoke("Say foo:")

    assert llm._llm_type == "together"
    assert isinstance(output, str)
    assert len(output) > 0


async def test_together_acall() -> None:
    """Test simple call to together."""
    llm = Together(
        model="togethercomputer/RedPajama-INCITE-7B-Base",
        temperature=0.2,
        max_tokens=250,
    )
    output = await llm.agenerate(["Say foo:"], stop=["bar"])

    assert llm._llm_type == "together"
    output_text = output.generations[0][0].text
    assert isinstance(output_text, str)
    assert output_text.count("bar") <= 1
@@ -1,6 +1,8 @@
from langchain_together import __all__

EXPECTED_ALL = [
    "__version__",
    "Together",
    "TogetherEmbeddings",
]

libs/partners/together/tests/unit_tests/test_llms.py (new file, 61 lines)
@@ -0,0 +1,61 @@
"""Test Together LLM"""
from typing import cast

from langchain_core.pydantic_v1 import SecretStr
from pytest import CaptureFixture, MonkeyPatch

from langchain_together import Together


def test_together_api_key_is_secret_string() -> None:
    """Test that the API key is stored as a SecretStr."""
    llm = Together(
        together_api_key="secret-api-key",
        model="togethercomputer/RedPajama-INCITE-7B-Base",
        temperature=0.2,
        max_tokens=250,
    )
    assert isinstance(llm.together_api_key, SecretStr)


def test_together_api_key_masked_when_passed_from_env(
    monkeypatch: MonkeyPatch, capsys: CaptureFixture
) -> None:
    """Test that the API key is masked when passed from an environment variable."""
    monkeypatch.setenv("TOGETHER_API_KEY", "secret-api-key")
    llm = Together(
        model="togethercomputer/RedPajama-INCITE-7B-Base",
        temperature=0.2,
        max_tokens=250,
    )
    print(llm.together_api_key, end="")
    captured = capsys.readouterr()

    assert captured.out == "**********"


def test_together_api_key_masked_when_passed_via_constructor(
    capsys: CaptureFixture,
) -> None:
    """Test that the API key is masked when passed via the constructor."""
    llm = Together(
        together_api_key="secret-api-key",
        model="togethercomputer/RedPajama-INCITE-7B-Base",
        temperature=0.2,
        max_tokens=250,
    )
    print(llm.together_api_key, end="")
    captured = capsys.readouterr()

    assert captured.out == "**********"


def test_together_uses_actual_secret_value_from_secretstr() -> None:
    """Test that the actual secret value is correctly retrieved."""
    llm = Together(
        together_api_key="secret-api-key",
        model="togethercomputer/RedPajama-INCITE-7B-Base",
        temperature=0.2,
        max_tokens=250,
    )
    assert cast(SecretStr, llm.together_api_key).get_secret_value() == "secret-api-key"
templates/neo4j-semantic-layer/README.md (new file, 92 lines)
@@ -0,0 +1,92 @@
# neo4j-semantic-layer

This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using OpenAI function calling.
The semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's intent.



## Tools

The agent utilizes several tools to interact with the Neo4j graph database effectively; a minimal sketch of what one such tool can look like follows this list:

1. **Information tool**:
   - Retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.
2. **Recommendation Tool**:
   - Provides movie recommendations based upon user preferences and input.
3. **Memory Tool**:
   - Stores information about user preferences in the knowledge graph, allowing for a personalized experience over multiple interactions.
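
The snippet below is only an illustrative, hypothetical sketch of the shape such a tool can take; the actual `InformationTool` shipped with this template may differ. It assumes the movie schema and environment variables described in this README:

```python
from typing import Type

from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool
from langchain_community.graphs import Neo4jGraph

# Connection parameters are read from the NEO4J_* environment variables
graph = Neo4jGraph()


class InformationInput(BaseModel):
    entity: str = Field(description="Movie title mentioned in the question")


class InformationTool(BaseTool):
    """Illustrative tool that fetches basic facts about a movie."""

    name: str = "Information"
    description: str = "Useful for answering questions about movies or people"
    args_schema: Type[BaseModel] = InformationInput

    def _run(self, entity: str) -> str:
        # Look the entity up in the graph and return the raw records as text
        records = graph.query(
            "MATCH (m:Movie {title: $entity}) "
            "RETURN m.title AS title, m.released AS released, m.imdbRating AS rating",
            {"entity": entity},
        )
        return str(records)
```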
## Environment Setup

You need to define the following environment variables:

```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```

## Populating with data

If you want to populate the DB with an example movie dataset, you can run `python ingest.py`.
The script imports information about movies and their ratings by users.
Additionally, the script creates two [fulltext indices](https://neo4j.com/docs/cypher-manual/current/indexes-for-full-text-search/), which are used to map information from user input to the database.
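
After ingestion you can sanity-check the data and the indices from Python. This is a small illustrative snippet (not part of the template) that assumes the environment variables above are set:

```python
from langchain_community.graphs import Neo4jGraph

# Reads NEO4J_URI, NEO4J_USERNAME and NEO4J_PASSWORD from the environment
graph = Neo4jGraph()

# How many movies were imported?
print(graph.query("MATCH (m:Movie) RETURN count(m) AS movies"))

# Resolve a free-text name through the `person` fulltext index created by ingest.py
print(
    graph.query(
        "CALL db.index.fulltext.queryNodes('person', 'John') "
        "YIELD node, score RETURN node.name AS name, score LIMIT 3"
    )
)
```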
## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U "langchain-cli[serve]"
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package neo4j-semantic-layer
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add neo4j-semantic-layer
```

And add the following code to your `server.py` file:
```python
from neo4j_semantic_layer import agent_executor as neo4j_semantic_agent

add_routes(app, neo4j_semantic_agent, path="/neo4j-semantic-layer")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by running:

```shell
langchain serve
```

This will start the FastAPI app, with the server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-semantic-layer/playground](http://127.0.0.1:8000/neo4j-semantic-layer/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/neo4j-semantic-layer")
```
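
For example, a minimal invocation sketch; the agent expects an `input` string and an optional `chat_history` list of (human, ai) tuples, matching the `AgentInput` schema defined in `agent.py`:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/neo4j-semantic-layer")
result = runnable.invoke(
    {"input": "What do you know about person John?", "chat_history": []}
)
print(result)
```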
templates/neo4j-semantic-layer/ingest.py (new file, 59 lines)
@@ -0,0 +1,59 @@
from langchain_community.graphs import Neo4jGraph

# Instantiate connection to Neo4j
graph = Neo4jGraph()

# Define unique constraints
graph.query("CREATE CONSTRAINT IF NOT EXISTS FOR (m:Movie) REQUIRE m.id IS UNIQUE;")
graph.query("CREATE CONSTRAINT IF NOT EXISTS FOR (u:User) REQUIRE u.id IS UNIQUE;")
graph.query("CREATE CONSTRAINT IF NOT EXISTS FOR (p:Person) REQUIRE p.name IS UNIQUE;")
graph.query("CREATE CONSTRAINT IF NOT EXISTS FOR (g:Genre) REQUIRE g.name IS UNIQUE;")

# Import movie information

movies_query = """
LOAD CSV WITH HEADERS FROM
'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies.csv'
AS row
CALL {
  WITH row
  MERGE (m:Movie {id:row.movieId})
  SET m.released = date(row.released),
      m.title = row.title,
      m.imdbRating = toFloat(row.imdbRating)
  FOREACH (director in split(row.director, '|') |
      MERGE (p:Person {name:trim(director)})
      MERGE (p)-[:DIRECTED]->(m))
  FOREACH (actor in split(row.actors, '|') |
      MERGE (p:Person {name:trim(actor)})
      MERGE (p)-[:ACTED_IN]->(m))
  FOREACH (genre in split(row.genres, '|') |
      MERGE (g:Genre {name:trim(genre)})
      MERGE (m)-[:IN_GENRE]->(g))
} IN TRANSACTIONS
"""

graph.query(movies_query)

# Import rating information
rating_query = """
LOAD CSV WITH HEADERS FROM
'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/ratings.csv'
AS row
CALL {
  WITH row
  MATCH (m:Movie {id:row.movieId})
  MERGE (u:User {id:row.userId})
  MERGE (u)-[r:RATED]->(m)
  SET r.rating = toFloat(row.rating),
      r.timestamp = row.timestamp
} IN TRANSACTIONS OF 10000 ROWS
"""

graph.query(rating_query)

# Define fulltext indices
graph.query("CREATE FULLTEXT INDEX movie IF NOT EXISTS FOR (m:Movie) ON EACH [m.title]")
graph.query(
    "CREATE FULLTEXT INDEX person IF NOT EXISTS FOR (p:Person) ON EACH [p.name]"
)
17 templates/neo4j-semantic-layer/main.py Normal file
@@ -0,0 +1,17 @@
from neo4j_semantic_layer import agent_executor

if __name__ == "__main__":
    original_query = "What do you know about person John?"
    followup_query = "John Travolta"
    chat_history = [
        (
            "What do you know about person John?",
            "I found multiple people named John. Could you please specify "
            "which one you are interested in? Here are some options:"
            "\n\n1. John Travolta\n2. John McDonough",
        )
    ]
    print(agent_executor.invoke({"input": original_query}))
    print(
        agent_executor.invoke({"input": followup_query, "chat_history": chat_history})
    )
@@ -0,0 +1,3 @@
from neo4j_semantic_layer.agent import agent_executor

__all__ = ["agent_executor"]
71 templates/neo4j-semantic-layer/neo4j_semantic_layer/agent.py Normal file
@@ -0,0 +1,71 @@
from typing import List, Tuple

from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.pydantic_v1 import BaseModel, Field
from langchain.schema import AIMessage, HumanMessage
from langchain.tools.render import format_tool_to_openai_function
from langchain_community.chat_models import ChatOpenAI

from neo4j_semantic_layer.information_tool import InformationTool
from neo4j_semantic_layer.memory_tool import MemoryTool
from neo4j_semantic_layer.recommendation_tool import RecommenderTool

llm = ChatOpenAI(temperature=0, model="gpt-4")
tools = [InformationTool(), RecommenderTool(), MemoryTool()]

llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that finds information about movies "
            " and recommends them. If tools require follow up questions, "
            "make sure to ask the user for clarification. Make sure to include any "
            "available options that need to be clarified in the follow up questions",
        ),
        MessagesPlaceholder(variable_name="chat_history"),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)


def _format_chat_history(chat_history: List[Tuple[str, str]]):
    buffer = []
    for human, ai in chat_history:
        buffer.append(HumanMessage(content=human))
        buffer.append(AIMessage(content=ai))
    return buffer


agent = (
    {
        "input": lambda x: x["input"],
        "chat_history": lambda x: _format_chat_history(x["chat_history"])
        if x.get("chat_history")
        else [],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)


# Add typing for input
class AgentInput(BaseModel):
    input: str
    chat_history: List[Tuple[str, str]] = Field(
        ..., extra={"widget": {"type": "chat", "input": "input", "output": "output"}}
    )


agent_executor = AgentExecutor(agent=agent, tools=tools).with_types(
    input_type=AgentInput
)
@@ -0,0 +1,74 @@
from typing import Optional, Type

from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)

# Import things that are needed generically
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool

from neo4j_semantic_layer.utils import get_candidates, graph

description_query = """
MATCH (m:Movie|Person)
WHERE m.title = $candidate OR m.name = $candidate
MATCH (m)-[r:ACTED_IN|DIRECTED|HAS_GENRE]-(t)
WITH m, type(r) as type, collect(coalesce(t.name, t.title)) as names
WITH m, type+": "+reduce(s="", n IN names | s + n + ", ") as types
WITH m, collect(types) as contexts
WITH m, "type:" + labels(m)[0] + "\ntitle: "+ coalesce(m.title, m.name)
       + "\nyear: "+coalesce(m.released,"") +"\n" +
       reduce(s="", c in contexts | s + substring(c, 0, size(c)-2) +"\n") as context
RETURN context LIMIT 1
"""


def get_information(entity: str, type: str) -> str:
    candidates = get_candidates(entity, type)
    if not candidates:
        return "No information was found about the movie or person in the database"
    elif len(candidates) > 1:
        newline = "\n"
        return (
            "Need additional information, which of these "
            f"did you mean: {newline + newline.join(str(d) for d in candidates)}"
        )
    data = graph.query(
        description_query, params={"candidate": candidates[0]["candidate"]}
    )
    return data[0]["context"]


class InformationInput(BaseModel):
    entity: str = Field(description="movie or a person mentioned in the question")
    entity_type: str = Field(
        description="type of the entity. Available options are 'movie' or 'person'"
    )


class InformationTool(BaseTool):
    name = "Information"
    description = (
        "useful for when you need to answer questions about various actors or movies"
    )
    args_schema: Type[BaseModel] = InformationInput

    def _run(
        self,
        entity: str,
        entity_type: str,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool."""
        return get_information(entity, entity_type)

    async def _arun(
        self,
        entity: str,
        entity_type: str,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        return get_information(entity, entity_type)
@@ -0,0 +1,72 @@
from typing import Optional, Type

from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)

# Import things that are needed generically
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool

from neo4j_semantic_layer.utils import get_candidates, get_user_id, graph

store_rating_query = """
MERGE (u:User {userId:$user_id})
WITH u
UNWIND $candidates as row
MATCH (m:Movie {title: row.candidate})
MERGE (u)-[r:RATED]->(m)
SET r.rating = toFloat($rating)
RETURN distinct 'Noted' AS response
"""


def store_movie_rating(movie: str, rating: int):
    user_id = get_user_id()
    candidates = get_candidates(movie, "movie")
    if not candidates:
        return "This movie is not in our database"
    response = graph.query(
        store_rating_query,
        params={"user_id": user_id, "candidates": candidates, "rating": rating},
    )
    try:
        return response[0]["response"]
    except Exception as e:
        print(e)
        return "Something went wrong"


class MemoryInput(BaseModel):
    movie: str = Field(description="movie the user liked")
    rating: int = Field(
        description=(
            "Rating from 1 to 5, where 1 represents heavy dislike "
            "and 5 represents that the user loved the movie"
        )
    )


class MemoryTool(BaseTool):
    name = "Memory"
    description = "useful for memorizing which movies the user liked"
    args_schema: Type[BaseModel] = MemoryInput

    def _run(
        self,
        movie: str,
        rating: int,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool."""
        return store_movie_rating(movie, rating)

    async def _arun(
        self,
        movie: str,
        rating: int,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        return store_movie_rating(movie, rating)
@@ -0,0 +1,143 @@
from typing import Optional, Type

from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool

from neo4j_semantic_layer.utils import get_candidates, get_user_id, graph

recommendation_query_db_history = """
MERGE (u:User {userId:$user_id})
WITH u
// get recommendation candidates
OPTIONAL MATCH (u)-[r1:RATED]->()<-[r2:RATED]-()-[r3:RATED]->(recommendation)
WHERE r1.rating > 3.5 AND r2.rating > 3.5 AND r3.rating > 3.5
  AND NOT EXISTS {(u)-[:RATED]->(recommendation)}
// rank and limit recommendations
WITH u, recommendation, count(*) AS count
ORDER BY count DESC LIMIT 3
RETURN recommendation.title AS movie
"""

recommendation_query_genre = """
MATCH (m:Movie)-[:IN_GENRE]->(g:Genre {name:$genre})
// filter out already seen movies by the user
WHERE NOT EXISTS {
    (m)<-[:RATED]-(:User {userId:$user_id})
}
// rank and limit recommendations
WITH m
ORDER BY m.imdbRating DESC LIMIT 3
RETURN m.title AS movie
"""


def recommendation_query_movie(genre: bool) -> str:
    return f"""
MATCH (m1:Movie)<-[r1:RATED]-()-[r2:RATED]->(m2:Movie)
WHERE r1.rating > 3.5 AND r2.rating > 3.5 and m1.title IN $movieTitles
// filter out already seen movies by the user
AND NOT EXISTS {{
    (m2)<-[:RATED]-(:User {{userId:$user_id}})
}}
{'AND EXISTS {(m2)-[:IN_GENRE]->(:Genre {name:$genre})}' if genre else ''}
// rank and limit recommendations
WITH m2, count(*) AS count
ORDER BY count DESC LIMIT 3
RETURN m2.title As movie
"""


def recommend_movie(movie: Optional[str] = None, genre: Optional[str] = None) -> str:
    """
    Recommends movies based on user's history and preference
    for a specific movie and/or genre.
    Returns:
        str: A string containing a list of recommended movies, or an error message.
    """
    user_id = get_user_id()
    params = {"user_id": user_id, "genre": genre}
    if not movie and not genre:
        # Try to recommend a movie based on the information in the db
        response = graph.query(recommendation_query_db_history, params)
        try:
            return ", ".join([el["movie"] for el in response])
        except Exception:
            return "Can you tell us about some of the movies you liked?"
    if not movie and genre:
        # Recommend top-voted movies in the genre that the user hasn't seen before
        response = graph.query(recommendation_query_genre, params)
        try:
            return ", ".join([el["movie"] for el in response])
        except Exception:
            return "Something went wrong"

    candidates = get_candidates(movie, "movie")
    if not candidates:
        return "The movie you mentioned wasn't found in the database"
    params["movieTitles"] = [el["candidate"] for el in candidates]
    query = recommendation_query_movie(bool(genre))
    response = graph.query(query, params)
    try:
        return ", ".join([el["movie"] for el in response])
    except Exception:
        return "Something went wrong"

all_genres = [
    "Action",
    "Adventure",
    "Animation",
    "Children",
    "Comedy",
    "Crime",
    "Documentary",
    "Drama",
    "Fantasy",
    "Film-Noir",
    "Horror",
    "IMAX",
    "Musical",
    "Mystery",
    "Romance",
    "Sci-Fi",
    "Thriller",
    "War",
    "Western",
]


class RecommenderInput(BaseModel):
    movie: Optional[str] = Field(description="movie used for recommendation")
    genre: Optional[str] = Field(
        description=(
            "genre used for recommendation. Available options are:" f"{all_genres}"
        )
    )


class RecommenderTool(BaseTool):
    name = "Recommender"
    description = "useful for when you need to recommend a movie"
    args_schema: Type[BaseModel] = RecommenderInput

    def _run(
        self,
        movie: Optional[str] = None,
        genre: Optional[str] = None,
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool."""
        return recommend_movie(movie, genre)

    async def _arun(
        self,
        movie: Optional[str] = None,
        genre: Optional[str] = None,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        return recommend_movie(movie, genre)
84 templates/neo4j-semantic-layer/neo4j_semantic_layer/utils.py Normal file
@@ -0,0 +1,84 @@
from typing import Dict, List

from langchain_community.graphs import Neo4jGraph

graph = Neo4jGraph()


def get_user_id() -> int:
    """
    Placeholder for a function that would normally retrieve
    a user's ID
    """
    return 1


def remove_lucene_chars(text: str) -> str:
    """Remove Lucene special characters"""
    special_chars = [
        "+",
        "-",
        "&",
        "|",
        "!",
        "(",
        ")",
        "{",
        "}",
        "[",
        "]",
        "^",
        '"',
        "~",
        "*",
        "?",
        ":",
        "\\",
    ]
    for char in special_chars:
        if char in text:
            text = text.replace(char, " ")
    return text.strip()


def generate_full_text_query(input: str) -> str:
    """
    Generate a full-text search query for a given input string.

    This function constructs a query string suitable for a full-text search.
    It processes the input string by splitting it into words and appending a
    similarity threshold (~0.8) to each word, then combines them using the AND
    operator. Useful for mapping movies and people from user questions
    to database values, and allows for some misspellings.
    """
    full_text_query = ""
    words = [el for el in remove_lucene_chars(input).split() if el]
    for word in words[:-1]:
        full_text_query += f" {word}~0.8 AND"
    full_text_query += f" {words[-1]}~0.8"
    return full_text_query.strip()
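

# Example of the generated query string (illustrative, derived from the logic above):
#   generate_full_text_query("John Travolta") -> "John~0.8 AND Travolta~0.8"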

candidate_query = """
CALL db.index.fulltext.queryNodes($index, $fulltextQuery, {limit: $limit})
YIELD node
RETURN coalesce(node.name, node.title) AS candidate,
       [el in labels(node) WHERE el IN ['Person', 'Movie'] | el][0] AS label
"""


def get_candidates(input: str, type: str, limit: int = 3) -> List[Dict[str, str]]:
    """
    Retrieve a list of candidate entities from database based on the input string.

    This function queries the Neo4j database using a full-text search. It takes the
    input string, generates a full-text query, and executes this query against the
    specified index in the database. The function returns a list of candidates
    matching the query, with each candidate being a dictionary containing their name
    (or title) and label (either 'Person' or 'Movie').
    """
    ft_query = generate_full_text_query(input)
    candidates = graph.query(
        candidate_query, {"fulltextQuery": ft_query, "index": type, "limit": limit}
    )
    return candidates
1751 templates/neo4j-semantic-layer/poetry.lock (generated) Normal file
File diff suppressed because it is too large
33 templates/neo4j-semantic-layer/pyproject.toml Normal file
@@ -0,0 +1,33 @@
[tool.poetry]
name = "neo4j-semantic-layer"
version = "0.1.0"
description = "Build a semantic layer to allow an agent to interact with a graph database in a consistent and robust way."
authors = [
    "Tomaz Bratanic <tomaz.bratanic@neo4j.com>",
]
readme = "README.md"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
langchain = "^0.1"
openai = "<2"
neo4j = "^5.14.0"

[tool.poetry.group.dev.dependencies]
langchain-cli = ">=0.0.20"

[tool.langserve]
export_module = "neo4j_semantic_layer"
export_attr = "agent_executor"

[tool.templates-hub]
use-case = "semantic_layer"
author = "Neo4j"
integrations = ["Neo4j", "OpenAI"]
tags = ["search", "graph-database", "function-calling"]

[build-system]
requires = [
    "poetry-core",
]
build-backend = "poetry.core.masonry.api"
BIN templates/neo4j-semantic-layer/static/workflow.png Normal file
Binary file not shown (new image, 92 KiB).
0 templates/neo4j-semantic-layer/tests/__init__.py Normal file
21 templates/nvidia-rag-canonical/LICENSE Normal file
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 LangChain, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
121 templates/nvidia-rag-canonical/README.md Normal file
@@ -0,0 +1,121 @@

# nvidia-rag-canonical

This template performs RAG using Milvus Vector Store and NVIDIA Models (Embedding and Chat).

## Environment Setup

You should export your NVIDIA API Key as an environment variable.
If you do not have an NVIDIA API Key, you can create one by following these steps:
1. Create a free account with the [NVIDIA GPU Cloud](https://catalog.ngc.nvidia.com/) service, which hosts AI solution catalogs, containers, models, etc.
2. Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`.
3. Select the `API` option and click `Generate Key`.
4. Save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.

```shell
export NVIDIA_API_KEY=...
```

For instructions on hosting the Milvus Vector Store, refer to the section at the bottom.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To use the NVIDIA models, install the LangChain NVIDIA AI Endpoints package:
```shell
pip install -U langchain_nvidia_aiplay
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package nvidia-rag-canonical
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add nvidia-rag-canonical
```

And add the following code to your `server.py` file:
```python
from nvidia_rag_canonical import chain as rag_nvidia_chain

add_routes(app, rag_nvidia_chain, path="/nvidia-rag")
```

If you want to set up an ingestion pipeline, you can add the following code to your `server.py` file:
```python
from nvidia_rag_canonical import ingest as rag_nvidia_ingest

add_routes(app, rag_nvidia_ingest, path="/nvidia-rag-ingest")
```
Note that for files ingested by the ingestion API, the server will need to be restarted for the newly ingested files to be accessible by the retriever.
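Once the routes above are served, the ingestion endpoint can be exercised from code. A minimal sketch (illustrative only; it assumes the LangServe instance from the next section is running on localhost:8000 with the `/nvidia-rag-ingest` route shown above):

```python
from langserve.client import RemoteRunnable

# The ingest runnable accepts a URL to a PDF file (see _ingest in nvidia_rag_canonical/chain.py).
ingest = RemoteRunnable("http://localhost:8000/nvidia-rag-ingest")
ingest.invoke("https://www.ssa.gov/news/press/factsheets/basicfact-alt.pdf")
# Remember to restart the server afterwards so the retriever can see the new documents.
```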

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

If you DO NOT already have a Milvus Vector Store you want to connect to, see the `Milvus Setup` section below before proceeding.

If you DO have a Milvus Vector Store you want to connect to, edit the connection details in `nvidia_rag_canonical/chain.py`.

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/nvidia-rag/playground](http://127.0.0.1:8000/nvidia-rag/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/nvidia-rag")
```
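
A minimal query sketch (illustrative; the chain's input is a plain question string, per the `Question` type in `nvidia_rag_canonical/chain.py`):

```python
# Hypothetical question about the ingested SSA fact sheet.
answer = runnable.invoke("How many people receive Social Security benefits?")
print(answer)
```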

## Milvus Setup

Use this step if you need to create a Milvus Vector Store and ingest data.
We will first follow the standard Milvus setup instructions [here](https://milvus.io/docs/install_standalone-docker.md).

1. Download the Docker Compose YAML file.
```shell
wget https://github.com/milvus-io/milvus/releases/download/v2.3.3/milvus-standalone-docker-compose.yml -O docker-compose.yml
```
2. Start the Milvus Vector Store container.
```shell
sudo docker compose up -d
```
3. Install the PyMilvus package to interact with the Milvus container.
```shell
pip install pymilvus
```
4. Let's now ingest some data! We can do that by moving into this directory and running the code in `ingest.py`, e.g.:

```shell
python ingest.py
```

Note that you can (and should!) change this to ingest data of your choice.
39 templates/nvidia-rag-canonical/ingest.py Normal file
@@ -0,0 +1,39 @@
import getpass
import os

from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.milvus import Milvus
from langchain_nvidia_aiplay import NVIDIAEmbeddings

if os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    print("Valid NVIDIA_API_KEY already in environment. Delete to reset")
else:
    nvapi_key = getpass.getpass("NVAPI Key (starts with nvapi-): ")
    assert nvapi_key.startswith("nvapi-"), f"{nvapi_key[:5]}... is not a valid key"
    os.environ["NVIDIA_API_KEY"] = nvapi_key

# Note: if you change this, you should also change it in `nvidia_rag_canonical/chain.py`
EMBEDDING_MODEL = "nvolveqa_40k"
HOST = "127.0.0.1"
PORT = "19530"
COLLECTION_NAME = "test"

embeddings = NVIDIAEmbeddings(model=EMBEDDING_MODEL)

if __name__ == "__main__":
    # Load docs
    loader = PyPDFLoader("https://www.ssa.gov/news/press/factsheets/basicfact-alt.pdf")
    data = loader.load()

    # Split docs
    text_splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=100)
    docs = text_splitter.split_documents(data)

    # Insert the documents in Milvus Vector Store
    vector_db = Milvus.from_documents(
        docs,
        embeddings,
        collection_name=COLLECTION_NAME,
        connection_args={"host": HOST, "port": PORT},
    )
@@ -0,0 +1,3 @@
from nvidia_rag_canonical.chain import chain, ingest

__all__ = ["chain", "ingest"]
91 templates/nvidia-rag-canonical/nvidia_rag_canonical/chain.py Normal file
@@ -0,0 +1,91 @@
import getpass
import os

from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import Milvus
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import (
    RunnableLambda,
    RunnableParallel,
    RunnablePassthrough,
)
from langchain_nvidia_aiplay import ChatNVIDIA, NVIDIAEmbeddings

EMBEDDING_MODEL = "nvolveqa_40k"
CHAT_MODEL = "llama2_13b"
HOST = "127.0.0.1"
PORT = "19530"
COLLECTION_NAME = "test"
INGESTION_CHUNK_SIZE = 500
INGESTION_CHUNK_OVERLAP = 0

if os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
    print("Valid NVIDIA_API_KEY already in environment. Delete to reset")
else:
    nvapi_key = getpass.getpass("NVAPI Key (starts with nvapi-): ")
    assert nvapi_key.startswith("nvapi-"), f"{nvapi_key[:5]}... is not a valid key"
    os.environ["NVIDIA_API_KEY"] = nvapi_key

# Read from Milvus Vector Store
embeddings = NVIDIAEmbeddings(model=EMBEDDING_MODEL)
vectorstore = Milvus(
    connection_args={"host": HOST, "port": PORT},
    collection_name=COLLECTION_NAME,
    embedding_function=embeddings,
)
retriever = vectorstore.as_retriever()

# RAG prompt
template = """<s>[INST] <<SYS>>
Use the following context to answer the user's question. If you don't know the answer,
just say that you don't know, don't try to make up an answer.
<</SYS>>
<s>[INST] Context: {context} Question: {question} Only return the helpful
answer below and nothing else. Helpful answer:[/INST]"
"""
prompt = ChatPromptTemplate.from_template(template)

# RAG
model = ChatNVIDIA(model=CHAT_MODEL)
chain = (
    RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
    | prompt
    | model
    | StrOutputParser()
)


# Add typing for input
class Question(BaseModel):
    __root__: str


chain = chain.with_types(input_type=Question)


def _ingest(url: str) -> dict:
    """Load and ingest the PDF file from the URL"""

    loader = PyPDFLoader(url)
    data = loader.load()

    # Split docs
    text_splitter = CharacterTextSplitter(
        chunk_size=INGESTION_CHUNK_SIZE, chunk_overlap=INGESTION_CHUNK_OVERLAP
    )
    docs = text_splitter.split_documents(data)

    # Insert the documents in Milvus Vector Store
    _ = Milvus.from_documents(
        documents=docs,
        embedding=embeddings,
        collection_name=COLLECTION_NAME,
        connection_args={"host": HOST, "port": PORT},
    )
    return {}


ingest = RunnableLambda(_ingest)
2258 templates/nvidia-rag-canonical/poetry.lock (generated) Normal file
File diff suppressed because it is too large
29 templates/nvidia-rag-canonical/pyproject.toml Normal file
@@ -0,0 +1,29 @@
[tool.poetry]
name = "nvidia-rag-canonical"
version = "0.1.0"
description = "RAG with NVIDIA"
authors = ["Sagar Bogadi Manjunath <sbogadimanju@nvidia.com>"]
readme = "README.md"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
langchain = "^0.1"
pymilvus = ">=2.3.0"
langchain-nvidia-aiplay = "^0.0.2"

[tool.poetry.group.dev.dependencies]
langchain-cli = ">=0.0.20"

[tool.langserve]
export_module = "nvidia_rag_canonical"
export_attr = "chain"

[tool.templates-hub]
use-case = "rag"
author = "LangChain"
integrations = ["Milvus", "NVIDIA"]
tags = ["vectordbs"]

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
0 templates/nvidia-rag-canonical/tests/__init__.py Normal file
1 templates/robocorp-action-server/.gitignore (vendored) Normal file
@@ -0,0 +1 @@
__pycache__
21 templates/robocorp-action-server/LICENSE Normal file
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 LangChain, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
81 templates/robocorp-action-server/README.md Normal file
@@ -0,0 +1,81 @@
# LangChain - Robocorp Action Server

This template enables using [Robocorp Action Server](https://github.com/robocorp/robocorp) served actions as tools for an Agent.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package robocorp-action-server
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add robocorp-action-server
```

And add the following code to your `server.py` file:

```python
from robocorp_action_server import agent_executor as action_server_chain

add_routes(app, action_server_chain, path="/robocorp-action-server")
```

### Running the Action Server

To run the Action Server, you need to have the Robocorp Action Server installed:

```bash
pip install -U robocorp-action-server
```

Then you can run the Action Server with:

```bash
action-server new
cd ./your-project-name
action-server start
```

### Configure LangSmith (Optional)

LangSmith will help us trace, monitor and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

### Start LangServe instance

If you are inside this directory, then you can spin up a LangServe instance directly by:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)

We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/robocorp-action-server/playground](http://127.0.0.1:8000/robocorp-action-server/playground)

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/robocorp-action-server")
```
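
As a quick check, a minimal invocation sketch (illustrative; the payload follows the `Input` typing in `robocorp_action_server/agent.py`):

```python
# Hypothetical prompt; the available tools depend on your running Action Server.
result = runnable.invoke({"input": "What actions can you perform?"})
print(result["output"])
```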
1924 templates/robocorp-action-server/poetry.lock (generated) Normal file
File diff suppressed because it is too large
25 templates/robocorp-action-server/pyproject.toml Normal file
@@ -0,0 +1,25 @@
[tool.poetry]
name = "robocorp-action-server"
version = "0.0.1"
description = ""
authors = ["Robocorp Technologies <info@robocorp.com>"]
readme = "README.md"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
langchain = "^0.1"
langchain-openai = ">=0.0.2,<0.1"
langchain-robocorp = ">=0.0.1,<0.1"

[tool.poetry.group.dev.dependencies]
langchain-cli = ">=0.0.20"
fastapi = "^0.104.0"
sse-starlette = "^1.6.5"

[tool.langserve]
export_module = "robocorp_action_server"
export_attr = "agent_executor"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
@@ -0,0 +1,3 @@
from robocorp_action_server.agent import agent_executor

__all__ = ["agent_executor"]
@@ -0,0 +1,36 @@
from langchain.agents import AgentExecutor, OpenAIFunctionsAgent
from langchain_core.messages import SystemMessage
from langchain_core.pydantic_v1 import BaseModel
from langchain_openai import ChatOpenAI
from langchain_robocorp import ActionServerToolkit

# Initialize LLM chat model
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Initialize Action Server Toolkit
toolkit = ActionServerToolkit(url="http://localhost:8080")
tools = toolkit.get_tools()

# Initialize Agent
system_message = SystemMessage(content="You are a helpful assistant")
prompt = OpenAIFunctionsAgent.create_prompt(system_message)
agent = OpenAIFunctionsAgent(
    llm=llm,
    prompt=prompt,
    tools=tools,
)

# Initialize Agent executor
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)


# Typings for Langserve playground
class Input(BaseModel):
    input: str


class Output(BaseModel):
    output: str


agent_executor = agent_executor.with_types(input_type=Input, output_type=Output)  # type: ignore[arg-type, assignment]
0 templates/robocorp-action-server/tests/__init__.py Normal file