Mirror of https://github.com/hwchase17/langchain.git, synced 2026-02-19 21:35:33 +00:00

Comparing v0.0.139...vwp/openap: 1 commit (b86a873e15)
@@ -2,7 +2,7 @@
⚡ Building applications with LLMs through composability ⚡
[](https://github.com/hwchase17/langchain/actions/workflows/lint.yml) [](https://github.com/hwchase17/langchain/actions/workflows/test.yml) [](https://github.com/hwchase17/langchain/actions/workflows/linkcheck.yml) [](https://pepy.tech/project/langchain) [](https://opensource.org/licenses/MIT) [](https://twitter.com/langchainai) [](https://discord.gg/6adMQxSpJS)
[](https://github.com/hwchase17/langchain/actions/workflows/lint.yml) [](https://github.com/hwchase17/langchain/actions/workflows/test.yml) [](https://github.com/hwchase17/langchain/actions/workflows/linkcheck.yml) [](https://opensource.org/licenses/MIT) [](https://twitter.com/langchainai) [](https://discord.gg/6adMQxSpJS)
**Production Support:** As you move your LangChains into production, we'd love to offer more comprehensive support.
Please fill out [this form](https://forms.gle/57d8AmXBYp8PP8tZA) and we'll set up a dedicated support Slack channel.
@@ -19,7 +19,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Getting API Credentials\n",
"# Getting API Credentials\n",
"\n",
"We'll be using quite some APIs in this notebook, here is a list and where to get them:\n",
"\n",
@@ -47,7 +47,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setting Up"
"# Setting Up"
]
},
{
@@ -103,7 +103,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Scenario 1: Just an LLM\n",
"# Scenario 1: Just an LLM\n",
"\n",
"First, let's just run a single LLM a few times and capture the resulting prompt-answer conversation in ClearML"
]
@@ -361,7 +361,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Scenario 2: Creating an agent with tools\n",
"# Scenario 2: Creating a agent with tools\n",
"\n",
"To show a more advanced workflow, let's create an agent with access to tools. The way ClearML tracks the results is not different though, only the table will look slightly different as there are other types of actions taken when compared to the earlier, simpler example.\n",
"\n",
@@ -542,7 +542,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tips and Next Steps\n",
"# Tips and Next Steps\n",
"\n",
"- Make sure you always use a unique `name` argument for the `clearml_callback.flush_tracker` function. If not, the model parameters used for a run will override the previous run!\n",
"\n",
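Side note on the tip quoted above: passing a unique `name` to `clearml_callback.flush_tracker` is what keeps one run's logged parameters from overwriting another's. A minimal sketch of the pattern, assuming the `ClearMLCallbackHandler` API of this LangChain version (the constructor arguments here are illustrative, not confirmed by this diff):

```python
from langchain.callbacks import ClearMLCallbackHandler, StdOutCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.llms import OpenAI

# Assumed constructor arguments; see the ClearML notebook itself for the real ones.
clearml_callback = ClearMLCallbackHandler(project_name="langchain-demo", stream_logs=True)
manager = CallbackManager([StdOutCallbackHandler(), clearml_callback])
llm = OpenAI(temperature=0, callback_manager=manager)

llm.generate(["Tell me a joke"])
# A unique `name` per flush; reusing a name lets a later run's model
# parameters override the earlier run (the pitfall the tip warns about).
clearml_callback.flush_tracker(langchain_asset=llm, name="run-1")

llm.generate(["Tell me a fact"])
clearml_callback.flush_tracker(langchain_asset=llm, name="run-2", finish=True)
```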
@@ -1,352 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Comet"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"In this guide we will demonstrate how to track your Langchain Experiments, Evaluation Metrics, and LLM Sessions with [Comet](https://www.comet.com/site/?utm_source=langchain&utm_medium=referral&utm_campaign=comet_notebook). \n",
"\n",
"<a target=\"_blank\" href=\"https://colab.research.google.com/github/hwchase17/langchain/blob/master/docs/ecosystem/comet_tracking.ipynb\">\n",
" <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
"</a>\n",
"\n",
"**Example Project:** [Comet with LangChain](https://www.comet.com/examples/comet-example-langchain/view/b5ZThK6OFdhKWVSP3fDfRtrNF/panels?utm_source=langchain&utm_medium=referral&utm_campaign=comet_notebook)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"<img width=\"1280\" alt=\"comet-langchain\" src=\"https://user-images.githubusercontent.com/7529846/230326720-a9711435-9c6f-4edb-a707-94b67271ab25.png\">\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Install Comet and Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install comet_ml\n",
"!pip install langchain\n",
"!pip install openai\n",
"!pip install google-search-results"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Initialize Comet and Set your Credentials"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can grab your [Comet API Key here](https://www.comet.com/signup?utm_source=langchain&utm_medium=referral&utm_campaign=comet_notebook) or click the link after intializing Comet"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import comet_ml\n",
"\n",
"comet_ml.init(project_name=\"comet-example-langchain\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set OpenAI and SerpAPI credentials"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You will need an [OpenAI API Key](https://platform.openai.com/account/api-keys) and a [SerpAPI API Key](https://serpapi.com/dashboard) to run the following examples"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"%env OPENAI_API_KEY=\"...\"\n",
"%env SERPAPI_API_KEY=\"...\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Scenario 1: Using just an LLM"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import datetime\n",
"\n",
"from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.llms import OpenAI\n",
"\n",
"comet_callback = CometCallbackHandler(\n",
" project_name=\"comet-example-langchain\",\n",
" complexity_metrics=True,\n",
" stream_logs=True,\n",
" tags=[\"llm\"],\n",
" visualizations=[\"dep\"],\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), comet_callback])\n",
"llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)\n",
"\n",
"llm_result = llm.generate([\"Tell me a joke\", \"Tell me a poem\", \"Tell me a fact\"] * 3)\n",
"print(\"LLM result\", llm_result)\n",
"comet_callback.flush_tracker(llm, finish=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Scenario 2: Using an LLM in a Chain"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"\n",
"comet_callback = CometCallbackHandler(\n",
" complexity_metrics=True,\n",
" project_name=\"comet-example-langchain\",\n",
" stream_logs=True,\n",
" tags=[\"synopsis-chain\"],\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), comet_callback])\n",
"\n",
"llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)\n",
"\n",
"template = \"\"\"You are a playwright. Given the title of play, it is your job to write a synopsis for that title.\n",
"Title: {title}\n",
"Playwright: This is a synopsis for the above play:\"\"\"\n",
"prompt_template = PromptTemplate(input_variables=[\"title\"], template=template)\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)\n",
"\n",
"test_prompts = [{\"title\": \"Documentary about Bigfoot in Paris\"}]\n",
"synopsis_chain.apply(test_prompts)\n",
"comet_callback.flush_tracker(synopsis_chain, finish=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Scenario 3: Using An Agent with Tools "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import initialize_agent, load_tools\n",
"from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.llms import OpenAI\n",
"\n",
"comet_callback = CometCallbackHandler(\n",
" project_name=\"comet-example-langchain\",\n",
" complexity_metrics=True,\n",
" stream_logs=True,\n",
" tags=[\"agent\"],\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), comet_callback])\n",
"llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)\n",
"\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm, callback_manager=manager)\n",
"agent = initialize_agent(\n",
" tools,\n",
" llm,\n",
" agent=\"zero-shot-react-description\",\n",
" callback_manager=manager,\n",
" verbose=True,\n",
")\n",
"agent.run(\n",
" \"Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?\"\n",
")\n",
"comet_callback.flush_tracker(agent, finish=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Scenario 4: Using Custom Evaluation Metrics"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"The `CometCallbackManager` also allows you to define and use Custom Evaluation Metrics to assess generated outputs from your model. Let's take a look at how this works. \n",
"\n",
"\n",
"In the snippet below, we will use the [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) metric to evaluate the quality of a generated summary of an input prompt. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!pip install rouge-score"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from rouge_score import rouge_scorer\n",
"\n",
"from langchain.callbacks import CometCallbackHandler, StdOutCallbackHandler\n",
"from langchain.callbacks.base import CallbackManager\n",
"from langchain.chains import LLMChain\n",
"from langchain.llms import OpenAI\n",
"from langchain.prompts import PromptTemplate\n",
"\n",
"\n",
"class Rouge:\n",
" def __init__(self, reference):\n",
" self.reference = reference\n",
" self.scorer = rouge_scorer.RougeScorer([\"rougeLsum\"], use_stemmer=True)\n",
"\n",
" def compute_metric(self, generation, prompt_idx, gen_idx):\n",
" prediction = generation.text\n",
" results = self.scorer.score(target=self.reference, prediction=prediction)\n",
"\n",
" return {\n",
" \"rougeLsum_score\": results[\"rougeLsum\"].fmeasure,\n",
" \"reference\": self.reference,\n",
" }\n",
"\n",
"\n",
"reference = \"\"\"\n",
"The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building.\n",
"It was the first structure to reach a height of 300 metres.\n",
"\n",
"It is now taller than the Chrysler Building in New York City by 5.2 metres (17 ft)\n",
"Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France .\n",
"\"\"\"\n",
"rouge_score = Rouge(reference=reference)\n",
"\n",
"template = \"\"\"Given the following article, it is your job to write a summary.\n",
"Article:\n",
"{article}\n",
"Summary: This is the summary for the above article:\"\"\"\n",
"prompt_template = PromptTemplate(input_variables=[\"article\"], template=template)\n",
"\n",
"comet_callback = CometCallbackHandler(\n",
" project_name=\"comet-example-langchain\",\n",
" complexity_metrics=False,\n",
" stream_logs=True,\n",
" tags=[\"custom_metrics\"],\n",
" custom_metrics=rouge_score.compute_metric,\n",
")\n",
"manager = CallbackManager([StdOutCallbackHandler(), comet_callback])\n",
"llm = OpenAI(temperature=0.9, callback_manager=manager, verbose=True)\n",
"\n",
"synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callback_manager=manager)\n",
"\n",
"test_prompts = [\n",
" {\n",
" \"article\": \"\"\"\n",
" The tower is 324 metres (1,063 ft) tall, about the same height as\n",
" an 81-storey building, and the tallest structure in Paris. Its base is square,\n",
" measuring 125 metres (410 ft) on each side.\n",
" During its construction, the Eiffel Tower surpassed the\n",
" Washington Monument to become the tallest man-made structure in the world,\n",
" a title it held for 41 years until the Chrysler Building\n",
" in New York City was finished in 1930.\n",
"\n",
" It was the first structure to reach a height of 300 metres.\n",
" Due to the addition of a broadcasting aerial at the top of the tower in 1957,\n",
" it is now taller than the Chrysler Building by 5.2 metres (17 ft).\n",
"\n",
" Excluding transmitters, the Eiffel Tower is the second tallest\n",
" free-standing structure in France after the Millau Viaduct.\n",
" \"\"\"\n",
" }\n",
"]\n",
"synopsis_chain.apply(test_prompts)\n",
"comet_callback.flush_tracker(synopsis_chain, finish=True)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -77,7 +77,7 @@
" Returns:\n",
" Action specifying what tool to use.\n",
" \"\"\"\n",
" return AgentAction(tool=\"Search\", tool_input=kwargs[\"input\"], log=\"\")\n",
" return AgentAction(tool=\"Search\", tool_input=\"foo\", log=\"\")\n",
"\n",
" async def aplan(\n",
" self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n",
@@ -92,7 +92,7 @@
" Returns:\n",
" Action specifying what tool to use.\n",
" \"\"\"\n",
" return AgentAction(tool=\"Search\", tool_input=kwargs[\"input\"], log=\"\")"
" return AgentAction(tool=\"Search\", tool_input=\"foo\", log=\"\")"
]
},
{
@@ -42,7 +42,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 11,
"id": "9af9734e",
"metadata": {},
"outputs": [],
@@ -67,7 +67,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 28,
"id": "becda2a1",
"metadata": {},
"outputs": [],
@@ -99,7 +99,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 7,
"id": "339b1bb8",
"metadata": {},
"outputs": [],
@@ -128,7 +128,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 22,
"id": "fd969d31",
"metadata": {},
"outputs": [],
@@ -159,7 +159,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 23,
"id": "798ef9fb",
"metadata": {},
"outputs": [],
@@ -187,7 +187,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 13,
"id": "7c6fe0d3",
"metadata": {},
"outputs": [],
@@ -216,7 +216,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 14,
"id": "d278706a",
"metadata": {},
"outputs": [],
@@ -236,7 +236,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 16,
"id": "f9d4c374",
"metadata": {},
"outputs": [],
@@ -268,7 +268,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 24,
"id": "9b1cc2a2",
"metadata": {},
"outputs": [],
@@ -279,7 +279,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 25,
"id": "e4f5092f",
"metadata": {},
"outputs": [],
@@ -305,7 +305,7 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 26,
"id": "490604e9",
"metadata": {},
"outputs": [],
@@ -315,7 +315,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 27,
"id": "653b1617",
"metadata": {},
"outputs": [
@@ -326,12 +326,11 @@
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out the population of Canada in 2023\n",
"Action: Search\n",
"\u001b[32;1m\u001b[1;3mAction: Search\n",
"Action Input: Population of Canada in 2023\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mThe current population of Canada is 38,658,314 as of Wednesday, April 12, 2023, based on Worldometer elaboration of the latest United Nations data.\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Arrr, there be 38,658,314 people livin' in Canada as of 2023!\u001b[0m\n",
"Observation:\u001b[36;1m\u001b[1;3m38,648,380\u001b[0m\u001b[32;1m\u001b[1;3m That's a lot of people!\n",
"Final Answer: Arrr, there be 38,648,380 people livin' in Canada come 2023!\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -339,10 +338,10 @@
{
"data": {
"text/plain": [
"\"Arrr, there be 38,658,314 people livin' in Canada as of 2023!\""
"\"Arrr, there be 38,648,380 people livin' in Canada come 2023!\""
]
},
"execution_count": 12,
"execution_count": 27,
"metadata": {},
"output_type": "execute_result"
}
@@ -351,203 +350,10 @@
"agent_executor.run(\"How many people live in canada as of 2023?\")"
]
},
{
"cell_type": "markdown",
"id": "d5b4a078",
"metadata": {},
"source": [
"## Adding Memory\n",
"\n",
"If you want to add memory to the agent, you'll need to:\n",
"\n",
"1. Add a place in the custom prompt for the chat_history\n",
"2. Add a memory object to the agent executor."
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "94fffda1",
"metadata": {},
"outputs": [],
"source": [
"# Set up the base template\n",
"template_with_history = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n",
"\n",
"{tools}\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [{tool_names}]\n",
"Action Input: the input to the action\n",
"Observation: the result of the action\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer\n",
"Final Answer: the final answer to the original input question\n",
"\n",
"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Arg\"s\n",
"\n",
"Previous conversation history:\n",
"{history}\n",
"\n",
"New question: {input}\n",
"{agent_scratchpad}\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "f58488d7",
"metadata": {},
"outputs": [],
"source": [
"prompt_with_history = CustomPromptTemplate(\n",
" template=template_with_history,\n",
" tools=tools,\n",
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
" # This includes the `intermediate_steps` variable because that is needed\n",
" input_variables=[\"input\", \"intermediate_steps\", \"history\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "d28d4b5a",
"metadata": {},
"outputs": [],
"source": [
"llm_chain = LLMChain(llm=llm, prompt=prompt_with_history)"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "3e37b32a",
"metadata": {},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in tools]\n",
"agent = LLMSingleActionAgent(\n",
" llm_chain=llm_chain, \n",
" output_parser=output_parser,\n",
" stop=[\"\\nObservation:\"], \n",
" allowed_tools=tool_names\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "97ea1bce",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import ConversationBufferWindowMemory"
]
},
{
"cell_type": "code",
"execution_count": 42,
"id": "b5ad69ce",
"metadata": {},
"outputs": [],
"source": [
"memory=ConversationBufferWindowMemory(k=2)"
]
},
{
"cell_type": "code",
"execution_count": 43,
"id": "b7b5c9b1",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": 44,
"id": "5ec4c39b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out the population of Canada in 2023\n",
"Action: Search\n",
"Action Input: Population of Canada in 2023\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mThe current population of Canada is 38,658,314 as of Wednesday, April 12, 2023, based on Worldometer elaboration of the latest United Nations data.\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: Arrr, there be 38,658,314 people livin' in Canada as of 2023!\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Arrr, there be 38,658,314 people livin' in Canada as of 2023!\""
]
},
"execution_count": 44,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"How many people live in canada as of 2023?\")"
]
},
{
"cell_type": "code",
"execution_count": 45,
"id": "b2ba45bb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out how many people live in Mexico.\n",
"Action: Search\n",
"Action Input: How many people live in Mexico as of 2023?\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mThe current population of Mexico is 132,679,922 as of Tuesday, April 11, 2023, based on Worldometer elaboration of the latest United Nations data. Mexico 2020 ...\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: Arrr, there be 132,679,922 people livin' in Mexico as of 2023!\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"Arrr, there be 132,679,922 people livin' in Mexico as of 2023!\""
]
},
"execution_count": 45,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(\"how about in mexico?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bd820a7a",
"id": "adefb4c2",
"metadata": {},
"outputs": [],
"source": []
@@ -0,0 +1,519 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c7ad998d",
"metadata": {},
"source": [
"# Tool Retrieval over large Natural Language APIs\n",
"\n",
"This tutorial assumes familiarity with [Natural Language API Toolkits](../../toolkits/examples/openapi_nla.ipynb). NLAToolkits parse whole OpenAPI specs, which can be too large to reasonably fit into an agent's context. This tutorial walks through using a Retriever object to fetch tools.\n",
"\n",
"### First, import dependencies and load the LLM"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6593f793",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import re\n",
"from typing import Callable, Union\n",
"\n",
"from langchain import OpenAI, LLMChain\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.requests import Requests\n",
"from langchain.agents import AgentExecutor, AgentOutputParser, AgentType, initialize_agent, LLMSingleActionAgent\n",
"from langchain.agents.agent_toolkits import NLAToolkit\n",
"from langchain.schema import AgentAction, AgentFinish"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "37141072",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"spoonacular_api_key = \"\" # Copy from the API Console at https://spoonacular.com/food-api/console#Profile"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "dd720860",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Select the LLM to use. Here, we use text-davinci-003\n",
"llm = OpenAI(temperature=0, max_tokens=700) # You can swap between different core LLM's here."
]
},
{
"cell_type": "markdown",
"id": "4cadac9d",
"metadata": {
"tags": []
},
"source": [
"### Next, load the Natural Language API Toolkits"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "6b208ab0",
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
"Unsupported APIPropertyLocation \"header\" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Accept. Valid values are ['path', 'query'] Ignoring optional parameter\n",
"Unsupported APIPropertyLocation \"header\" for parameter Content-Type. Valid values are ['path', 'query'] Ignoring optional parameter\n"
]
}
],
"source": [
"speak_toolkit = NLAToolkit.from_llm_and_url(llm, \"https://api.speak.com/openapi.yaml\")\n",
"klarna_toolkit = NLAToolkit.from_llm_and_url(llm, \"https://www.klarna.com/us/shopping/public/openai/v0/api-docs/\")\n",
"\n",
"# Add the API key for authenticating to the API\n",
"requests = Requests(headers={\"x-api-key\": spoonacular_api_key})\n",
"spoonacular_toolkit = NLAToolkit.from_llm_and_url(\n",
" llm, \n",
" \"https://spoonacular.com/application/frontend/downloads/spoonacular-openapi-3.json\",\n",
" requests=requests,\n",
" max_text_length=1800, # If you want to truncate the response text\n",
")\n",
"toolkits = (speak_toolkit, spoonacular_toolkit, klarna_toolkit)\n",
"ALL_TOOLS = [tool for toolkit in toolkits for tool in toolkit.get_tools()]"
]
},
{
"cell_type": "markdown",
"id": "e5dfc494",
"metadata": {},
"source": [
"## Tool Retriever\n",
"\n",
"We will use a vectorstore to create embeddings for each tool description. Then, for an incoming query we can create embeddings for that query and do a similarity search for relevant tools."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f7c29e82",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.vectorstores import FAISS\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.schema import Document"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "5cd940a8",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"docs = [Document(page_content=t.description, metadata={\"index\": i}) for i, t in enumerate(ALL_TOOLS)]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "19d77004",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"vector_store = FAISS.from_documents(docs, OpenAIEmbeddings())"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "3d84c14d-c3eb-4381-8852-fd6175b02239",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Create the retriever object\n",
"retriever = vector_store.as_retriever()\n",
"\n",
"def get_tools(query):\n",
" docs = retriever.get_relevant_documents(query)\n",
" return [ALL_TOOLS[d.metadata[\"index\"]] for d in docs]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "d37782ab-a74c-4cd0-8712-ba9166152206",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"('Speak.explainPhrase',\n",
" \"I'm an AI from Speak. Instruct what you want, and I'll assist via an API with description: Explain the meaning and usage of a specific foreign language phrase that the user is asking about.\")"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tool = get_tools(\"How would I ask 'What is the most important thing?' in Maori?\")[0]\n",
"tool.name, tool.description"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "88814a90-f1fe-49e7-88e4-fcd662142cfb",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"('spoonacular_API.ingredientSearch',\n",
" \"I'm an AI from spoonacular API. Instruct what you want, and I'll assist via an API with description: Search for simple whole foods (e.g. fruits, vegetables, nuts, grains, meat, fish, dairy etc.).\")"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tool = get_tools(\"What's a good vegetarian Thanksgiving dish?\")[0]\n",
"tool.name, tool.description"
]
},
{
"cell_type": "markdown",
"id": "16c7336f",
"metadata": {},
"source": [
"### Create the Agent"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "3bc3de7c-dca9-4e29-9efb-dadb714c7100",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Set up a prompt template\n",
"class CustomPromptTemplate(StringPromptTemplate):\n",
" # The template to use\n",
" template: str\n",
" ############## NEW ######################\n",
" # The list of tools available\n",
" tools_getter: Callable\n",
" \n",
" def format(self, **kwargs) -> str:\n",
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
" # Format them in a particular way\n",
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
" thoughts = \"\"\n",
" for action, observation in intermediate_steps:\n",
" thoughts += action.log\n",
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
" # Set the agent_scratchpad variable to that value\n",
" kwargs[\"agent_scratchpad\"] = thoughts\n",
" ############## NEW ######################\n",
" tools = self.tools_getter(kwargs['input'])\n",
" # Create a tools variable from the list of tools provided\n",
" kwargs[\"tools\"] = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])\n",
" # Create a list of tool names for the tools provided\n",
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in tools])\n",
" return self.template.format(**kwargs)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "9302880c-ad1d-490e-912f-bb166387c200",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Slightly tweak the instructions from the default agent\n",
"template = \"\"\"Answer the following questions as best you can. You have access to the following tools:\n",
"\n",
"{tools}\n",
"\n",
"Use the following format:\n",
"\n",
"Question: the input question you must answer\n",
"Thought: you should always think about what to do\n",
"Action: the action to take, should be one of [{tool_names}]\n",
"Action Input: full description of what you want to accomplish so the tool AI can assist.\n",
"Observation: The Agent's response\n",
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
"Thought: I now know the final answer. User can't see any of my observations, API responses, links, or tools.\n",
"Final Answer: the final answer to the original input question with the right amount of detail\n",
"\n",
"When responding with your Final Answer, remember that the person you are responding to CANNOT see any of your Thought/Action/Action Input/Observations, so if there is any relevant information there you need to include it explicitly in your response.\n",
"Begin!\n",
"\n",
"Question: {input}\n",
"Thought:{agent_scratchpad}\"\"\"\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "36dedac3-94c2-4f02-b3ed-f44793c84eac",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"prompt = CustomPromptTemplate(\n",
" template=template,\n",
" tools_getter=get_tools,\n",
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
" # This includes the `intermediate_steps` variable because that is needed\n",
" input_variables=[\"input\", \"intermediate_steps\"]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "8f2fe3cb-1060-4e71-8c9f-31cbb1931f1a",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"class CustomOutputParser(AgentOutputParser):\n",
" \n",
" def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
" # Check if agent should finish\n",
" if \"Final Answer:\" in llm_output:\n",
" return AgentFinish(\n",
" # Return values is generally always a dictionary with a single `output` key\n",
" # It is not recommended to try anything else at the moment :)\n",
" return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
" log=llm_output,\n",
" )\n",
" # Parse out the action and action input\n",
" regex = r\"Action: (.*?)[\\n]*Action Input:[\\s]*(.*)\"\n",
" match = re.search(regex, llm_output, re.DOTALL)\n",
" if not match:\n",
" raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
" action = match.group(1).strip()\n",
" action_input = match.group(2)\n",
" # Return the action and action input\n",
" return AgentAction(tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "544ae3eb-9e10-40c9-b134-de752761f4f2",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"output_parser = CustomOutputParser()"
]
},
{
"cell_type": "markdown",
"id": "4609a1ce-3f72-403e-bf83-958491f5805a",
"metadata": {},
"source": [
"## Set up LLM, stop sequence, and the agent\n",
"\n",
"Also the same as the previous notebook"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "af14b84e-2713-4515-bc3c-c602b5527a06",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# LLM chain consisting of the LLM and a prompt\n",
"llm_chain = LLMChain(llm=llm, prompt=prompt)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "bc534f60-5042-454a-afd0-c2809b82fa6a",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"tool_names = [tool.name for tool in ALL_TOOLS]\n",
"agent = LLMSingleActionAgent(\n",
" llm_chain=llm_chain, \n",
" output_parser=output_parser,\n",
" stop=[\"\\nObservation:\"], \n",
" allowed_tools=tool_names,\n",
" verbose=True,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "7283d902-f682-4631-aacc-a37616999de9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=ALL_TOOLS, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "ffe44871-25b0-4ef5-89a4-5a643fa6425d",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# Make the query more complex!\n",
"user_input = (\n",
" \"My Spanish relatives are coming and I need to pick some good wine and some food. Also, any advice on how to talk about it to them would be much appreciated\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "2f36278e-ec09-4237-8b7f-c9859ccd12e0",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m I need to find a wine and a dish that go well together, and also provide some advice on how to talk about it\n",
"Action: spoonacular_API.getWinePairing\n",
"Action Input: \"Spanish cuisine\"\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mI attempted to call an API to find a wine pairing for Spanish cuisine, but the API returned an error saying it could not find a pairing. It may be that there is no known wine pairing for Spanish cuisine.\u001b[0m\u001b[32;1m\u001b[1;3m I should try to find a specific dish and then find a wine that goes well with it\n",
"Action: spoonacular_API.getWinePairing\n",
"Action Input: \"paella\"\u001b[0m\n",
"\n",
"Observation:\u001b[36;1m\u001b[1;3mWhen pairing wine with Spanish dishes, it is recommended to follow the rule 'what grows together goes together'. For paella, we recommend albariño for white wine and garnachan and tempranillo for red. One wine you could try is Becker Vineyards Tempranillo, which has 4.4 out of 5 stars and a bottle costs about 18 dollars.\u001b[0m\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: For your Spanish relatives, you should try pairing paella with Becker Vineyards Tempranillo. This red wine has 4.4 out of 5 stars and a bottle costs about 18 dollars. When pairing wine with Spanish dishes, it is recommended to follow the rule 'what grows together goes together'.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"\"For your Spanish relatives, you should try pairing paella with Becker Vineyards Tempranillo. This red wine has 4.4 out of 5 stars and a bottle costs about 18 dollars. When pairing wine with Spanish dishes, it is recommended to follow the rule 'what grows together goes together'.\""
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.run(user_input)"
]
},
{
"cell_type": "markdown",
"id": "a2959462",
"metadata": {},
"source": [
"## Thank you!"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
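A quick illustration of the `CustomOutputParser` added in the notebook above — a hypothetical usage sketch (the sample completion string is invented for illustration; `output_parser` is the instance constructed in the notebook):

```python
from langchain.schema import AgentAction

sample_completion = (
    "I should look up the phrase\n"
    "Action: Speak.explainPhrase\n"
    "Action Input: what is the most important thing in Maori"
)
result = output_parser.parse(sample_completion)
# The regex pulls the tool name and its input out of the raw completion;
# a "Final Answer:" marker would instead produce an AgentFinish.
assert isinstance(result, AgentAction)
print(result.tool)        # Speak.explainPhrase
print(result.tool_input)  # what is the most important thing in Maori
```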
@@ -80,8 +80,8 @@
}
],
"source": [
"llm = ChatOpenAI(temperature=0)\n",
"tools = load_tools([\"requests_all\"] )\n",
"llm = ChatOpenAI(temperature=0,)\n",
"tools = load_tools([\"requests\"] )\n",
"tools += [tool]\n",
"\n",
"agent_chain = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\n",
@@ -144,160 +144,6 @@
"\u001b[32;1m\u001b[1;3mYou are a helpful AI Assistant. Please provide JSON arguments to agentFunc() based on the user's instructions.\n",
"\n",
"API_SCHEMA: ```typescript\n",
"/* API for fetching Klarna product information */\n",
"type productsUsingGET = (_: {\n",
"/* A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. */\n",
"\t\tq: string,\n",
"/* number of products returned */\n",
"\t\tsize?: number,\n",
"/* (Optional) Minimum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for. */\n",
"\t\tmin_price?: number,\n",
"/* (Optional) Maximum price in local currency for the product searched for. Either explicitly stated by the user or implicitly inferred from a combination of the user's request and the kind of product searched for. */\n",
"\t\tmax_price?: number,\n",
"}) => any;\n",
"```\n",
"\n",
"USER_INSTRUCTIONS: \"whats the most expensive shirt?\"\n",
"\n",
"Your arguments must be plain json provided in a markdown block:\n",
"\n",
"ARGS: ```json\n",
"{valid json conforming to API_SCHEMA}\n",
"```\n",
"\n",
"Example\n",
"-----\n",
"\n",
"ARGS: ```json\n",
"{\"foo\": \"bar\", \"baz\": {\"qux\": \"quux\"}}\n",
"```\n",
"\n",
"The block must be no more than 1 line long, and all arguments must be valid JSON. All string arguments must be wrapped in double quotes.\n",
"You MUST strictly comply to the types indicated by the provided schema, including all required args.\n",
"\n",
"If you don't have sufficient information to call the function due to things like requiring specific uuid's, you can reply with the following message:\n",
"\n",
"Message: ```text\n",
"Concise response requesting the additional information that would make calling the function successful.\n",
"```\n",
"\n",
"Begin\n",
"-----\n",
"ARGS:\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\"q\": \"shirt\", \"size\": 1, \"max_price\": null}\u001b[0m\n",
"\u001b[36;1m\u001b[1;3m{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]}]}\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new APIResponderChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a helpful AI assistant trained to answer user queries from API responses.\n",
"You attempted to call an API, which resulted in:\n",
"API_RESPONSE: {\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]}]}\n",
"\n",
"USER_COMMENT: \"whats the most expensive shirt?\"\n",
"\n",
"\n",
"If the API_RESPONSE can answer the USER_COMMENT respond with the following markdown json block:\n",
"Response: ```json\n",
"{\"response\": \"Human-understandable synthesis of the API_RESPONSE\"}\n",
"```\n",
"\n",
"Otherwise respond with the following markdown json block:\n",
"Response Error: ```json\n",
"{\"response\": \"What you did and a concise statement of the resulting error. If it can be easily fixed, provide a suggestion.\"}\n",
"```\n",
"\n",
"You MUST respond as a markdown json code block. The person you are responding to CANNOT see the API_RESPONSE, so if there is any relevant information there you must include it in your response.\n",
"\n",
"Begin:\n",
"---\n",
"\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[33;1m\u001b[1;3mThe most expensive shirt in the API response is the Burberry Check Poplin Shirt, which costs $360.00.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
}
],
"source": [
"output = chain(\"whats the most expensive shirt?\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c000295e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'request_args': '{\"q\": \"shirt\", \"size\": 1, \"max_price\": null}',\n",
" 'response_text': '{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]}]}'}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# View intermediate steps\n",
"output[\"intermediate_steps\"]"
]
},
{
"cell_type": "markdown",
"id": "092bdb4d",
"metadata": {},
"source": [
"## Return raw response\n",
"\n",
"We can also run this chain without synthesizing the response. This will have the effect of just returning the raw API output."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "4dff3849",
"metadata": {},
"outputs": [],
"source": [
"chain = OpenAPIEndpointChain.from_api_operation(\n",
" operation, \n",
" llm, \n",
" requests=Requests(), \n",
" verbose=True,\n",
" return_intermediate_steps=True, # Return request and response text\n",
" raw_response=True # Return raw response\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "762499a9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new OpenAPIEndpointChain chain...\u001b[0m\n",
"\n",
"\n",
"\u001b[1m> Entering new APIRequesterChain chain...\u001b[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mYou are a helpful AI Assistant. Please provide JSON arguments to agentFunc() based on the user's instructions.\n",
"\n",
"API_SCHEMA: ```typescript\n",
"/* API for fetching Klarna product information */\n",
"type productsUsingGET = (_: {\n",
"/* A precise query that matches one very small category or product that needs to be searched for to find the products the user is looking for. If the user explicitly stated what they want, use that as a query. The query is as specific as possible to the product name or category mentioned by the user in its singular form, and don't contain any clarifiers like latest, newest, cheapest, budget, premium, expensive or similar. The query is always taken from the latest topic, if there is a new topic a new query is started. */\n",
"\t\tq: string,\n",
@@ -341,7 +187,36 @@
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m{\"q\": \"shirt\", \"max_price\": null}\u001b[0m\n",
"\u001b[36;1m\u001b[1;3m{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Cotton Shirt - Beige\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$229.02\",\"attributes\":[\"Material:Cotton,Elastane\",\"Color:Beige\",\"Model:Boy\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Stretch Cotton Twill Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202342515/Clothing/Burberry-Vintage-Check-Stretch-Cotton-Twill-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$309.99\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Woman\",\"Color:Beige\",\"Properties:Stretch\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Somerton Check Shirt - Camel\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$450.00\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Man\",\"Color:Beige\"]},{\"name\":\"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$19.99\",\"attributes\":[\"Material:Polyester,Nylon\",\"Target Group:Man\",\"Color:Red,Pink,White,Blue,Purple,Beige,Black,Green\",\"Properties:Pockets\",\"Pattern:Solid Color\"]}]}\u001b[0m\n",
|
||||
"\u001b[36;1m\u001b[1;3m{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Cotton Shirt - Beige\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$196.30\",\"attributes\":[\"Material:Cotton,Elastane\",\"Color:Beige\",\"Model:Boy\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Somerton Check Shirt - Camel\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$450.00\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Man\",\"Color:Beige\"]},{\"name\":\"Calvin Klein Slim Fit Oxford Dress Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201839169/Clothing/Calvin-Klein-Slim-Fit-Oxford-Dress-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$24.91\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,White,Blue,Black\",\"Pattern:Solid Color\"]},{\"name\":\"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$19.99\",\"attributes\":[\"Material:Polyester,Nylon\",\"Target Group:Man\",\"Color:Red,Pink,White,Blue,Purple,Beige,Black,Green\",\"Properties:Pockets\",\"Pattern:Solid Color\"]}]}\u001b[0m\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new APIResponderChain chain...\u001b[0m\n",
|
||||
"Prompt after formatting:\n",
|
||||
"\u001b[32;1m\u001b[1;3mYou are a helpful AI assistant trained to answer user queries from API responses.\n",
|
||||
"You attempted to call an API, which resulted in:\n",
|
||||
"API_RESPONSE: {\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Cotton Shirt - Beige\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$196.30\",\"attributes\":[\"Material:Cotton,Elastane\",\"Color:Beige\",\"Model:Boy\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Somerton Check Shirt - Camel\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$450.00\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Man\",\"Color:Beige\"]},{\"name\":\"Calvin Klein Slim Fit Oxford Dress Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201839169/Clothing/Calvin-Klein-Slim-Fit-Oxford-Dress-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$24.91\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,White,Blue,Black\",\"Pattern:Solid Color\"]},{\"name\":\"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$19.99\",\"attributes\":[\"Material:Polyester,Nylon\",\"Target Group:Man\",\"Color:Red,Pink,White,Blue,Purple,Beige,Black,Green\",\"Properties:Pockets\",\"Pattern:Solid Color\"]}]}\n",
|
||||
"\n",
|
||||
"USER_COMMENT: \"whats the most expensive shirt?\"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"If the API_RESPONSE can answer the USER_COMMENT respond with the following markdown json block:\n",
|
||||
"Response: ```json\n",
|
||||
"{\"response\": \"Concise response to USER_COMMENT based on API_RESPONSE.\"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"Otherwise respond with the following markdown json block:\n",
|
||||
"Response Error: ```json\n",
|
||||
"{\"response\": \"What you did and a concise statement of the resulting error. If it can be easily fixed, provide a suggestion.\"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"You MUST respond as a markdown json code block.\n",
|
||||
"\n",
|
||||
"Begin:\n",
|
||||
"---\n",
|
||||
"\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n",
|
||||
"\u001b[33;1m\u001b[1;3mThe most expensive shirt in this list is the 'Burberry Somerton Check Shirt - Camel' which is priced at $450.00\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
@@ -353,26 +228,25 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "4afc021a",
|
||||
"execution_count": 8,
|
||||
"id": "c000295e",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"{'instructions': 'whats the most expensive shirt?',\n",
|
||||
" 'output': '{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Cotton Shirt - Beige\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$229.02\",\"attributes\":[\"Material:Cotton,Elastane\",\"Color:Beige\",\"Model:Boy\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Stretch Cotton Twill Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202342515/Clothing/Burberry-Vintage-Check-Stretch-Cotton-Twill-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$309.99\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Woman\",\"Color:Beige\",\"Properties:Stretch\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Somerton Check Shirt - Camel\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$450.00\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Man\",\"Color:Beige\"]},{\"name\":\"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$19.99\",\"attributes\":[\"Material:Polyester,Nylon\",\"Target Group:Man\",\"Color:Red,Pink,White,Blue,Purple,Beige,Black,Green\",\"Properties:Pockets\",\"Pattern:Solid Color\"]}]}',\n",
|
||||
" 'intermediate_steps': {'request_args': '{\"q\": \"shirt\", \"max_price\": null}',\n",
|
||||
" 'response_text': '{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Cotton Shirt - Beige\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$229.02\",\"attributes\":[\"Material:Cotton,Elastane\",\"Color:Beige\",\"Model:Boy\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Stretch Cotton Twill Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3202342515/Clothing/Burberry-Vintage-Check-Stretch-Cotton-Twill-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$309.99\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Woman\",\"Color:Beige\",\"Properties:Stretch\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Somerton Check Shirt - Camel\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$450.00\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Man\",\"Color:Beige\"]},{\"name\":\"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$19.99\",\"attributes\":[\"Material:Polyester,Nylon\",\"Target Group:Man\",\"Color:Red,Pink,White,Blue,Purple,Beige,Black,Green\",\"Properties:Pockets\",\"Pattern:Solid Color\"]}]}'}}"
|
||||
"['{\"q\": \"shirt\", \"max_price\": null}',\n",
|
||||
" '{\"products\":[{\"name\":\"Burberry Check Poplin Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201810981/Clothing/Burberry-Check-Poplin-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$360.00\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,Blue,Beige\",\"Properties:Pockets\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Vintage Check Cotton Shirt - Beige\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl359/3200280807/Children-s-Clothing/Burberry-Vintage-Check-Cotton-Shirt-Beige/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$196.30\",\"attributes\":[\"Material:Cotton,Elastane\",\"Color:Beige\",\"Model:Boy\",\"Pattern:Checkered\"]},{\"name\":\"Burberry Somerton Check Shirt - Camel\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201112728/Clothing/Burberry-Somerton-Check-Shirt-Camel/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$450.00\",\"attributes\":[\"Material:Elastane/Lycra/Spandex,Cotton\",\"Target Group:Man\",\"Color:Beige\"]},{\"name\":\"Calvin Klein Slim Fit Oxford Dress Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3201839169/Clothing/Calvin-Klein-Slim-Fit-Oxford-Dress-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$24.91\",\"attributes\":[\"Material:Cotton\",\"Target Group:Man\",\"Color:Gray,White,Blue,Black\",\"Pattern:Solid Color\"]},{\"name\":\"Magellan Outdoors Laguna Madre Solid Short Sleeve Fishing Shirt\",\"url\":\"https://www.klarna.com/us/shopping/pl/cl10001/3203102142/Clothing/Magellan-Outdoors-Laguna-Madre-Solid-Short-Sleeve-Fishing-Shirt/?utm_source=openai&ref-site=openai_plugin\",\"price\":\"$19.99\",\"attributes\":[\"Material:Polyester,Nylon\",\"Target Group:Man\",\"Color:Red,Pink,White,Blue,Purple,Beige,Black,Green\",\"Properties:Pockets\",\"Pattern:Solid Color\"]}]}']"
|
||||
]
|
||||
},
|
||||
"execution_count": 12,
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"output"
|
||||
"# View intermediate steps\n",
|
||||
"output[\"intermediate_steps\"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -574,7 +448,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
"version": "3.11.2"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "ab66dd43",
|
||||
"metadata": {},
|
||||
@@ -10,12 +9,12 @@
|
||||
"\n",
|
||||
"This notebook goes over how to use a retriever that under the hood uses Pinecone and Hybrid Search.\n",
|
||||
"\n",
|
||||
"The logic of this retriever is taken from [this documentaion](https://docs.pinecone.io/docs/hybrid-search)"
"The logic of this retriever is largely taken from [this blog post](https://www.pinecone.io/learn/hybrid-search-intro/)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 75,
|
||||
"execution_count": 1,
|
||||
"id": "393ac030",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -32,61 +31,43 @@
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "15390796",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import pinecone # !pip install pinecone-client\n",
|
||||
"\n",
|
||||
"pinecone.init(\n",
|
||||
" api_key=\"...\", # API key here\n",
|
||||
" environment=\"...\" # find next to api key in console\n",
|
||||
")\n",
|
||||
"# choose a name for your index\n",
|
||||
"index_name = \"...\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "95d5d7f9",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"You should only have to do this part once.\n",
|
||||
"\n",
|
||||
"Note: it's important to make sure that the \"context\" field that holds the document text in the metadata is not indexed. Currently you need to specify explicitly the fields you do want to index. For more information checkout Pinecone's [docs](https://docs.pinecone.io/docs/manage-indexes#selective-metadata-indexing)."
"You should only have to do this part once."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 76,
|
||||
"id": "3b8f7697",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"WhoAmIResponse(username='load', user_label='label', projectname='load-test')"
|
||||
]
|
||||
},
|
||||
"execution_count": 76,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"import pinecone\n",
|
||||
"\n",
|
||||
"api_key = os.getenv(\"PINECONE_API_KEY\") or \"PINECONE_API_KEY\"\n",
|
||||
"# find environment next to your API key in the Pinecone console\n",
|
||||
"env = os.getenv(\"PINECONE_ENVIRONMENT\") or \"PINECONE_ENVIRONMENT\"\n",
|
||||
"\n",
|
||||
"index_name = \"langchain-pinecone-hybrid-search\"\n",
|
||||
"\n",
|
||||
"pinecone.init(api_key=api_key, enviroment=env)\n",
"pinecone.whoami()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 77,
|
||||
"execution_count": null,
|
||||
"id": "cfa3a8d8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
" # create the index\n",
|
||||
"# create the index\n",
|
||||
"pinecone.create_index(\n",
|
||||
" name = index_name,\n",
|
||||
" dimension = 1536, # dimensionality of dense model\n",
|
||||
" metric = \"dotproduct\", # sparse values supported only for dotproduct\n",
|
||||
" pod_type = \"s1\",\n",
|
||||
" metadata_config={\"indexed\": []} # see explaination above\n",
" metric = \"dotproduct\",\n",
|
||||
" pod_type = \"s1\"\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
@@ -100,7 +81,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 78,
|
||||
"execution_count": 4,
|
||||
"id": "bcb3c8c2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -109,19 +90,18 @@
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "dbc025d6",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Get embeddings and sparse encoders\n",
|
||||
"## Get embeddings and tokenizers\n",
|
||||
"\n",
|
||||
"Embeddings are used for the dense vectors, tokenizer is used for the sparse vector"
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 79,
|
||||
"execution_count": 5,
|
||||
"id": "2f63c911",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -130,51 +110,19 @@
|
||||
"embeddings = OpenAIEmbeddings()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "96bf8879",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"To encode the text to sparse values you can either choose SPLADE or BM25. For out of domain tasks we recommend using BM25.\n",
|
||||
"\n",
|
||||
"For more information about the sparse encoders you can checkout pinecone-text library [docs](https://pinecone-io.github.io/pinecone-text/pinecone_text.html)."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 80,
|
||||
"execution_count": 6,
|
||||
"id": "c3f030e5",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from pinecone_text.sparse import BM25Encoder\n",
|
||||
"# or from pinecone_text.sparse import SpladeEncoder if you wish to work with SPLADE\n",
|
||||
"from transformers import BertTokenizerFast # !pip install transformers\n",
|
||||
"\n",
|
||||
"# use default tf-idf values\n",
|
||||
"bm25_encoder = BM25Encoder().default()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"attachments": {},
|
||||
"cell_type": "markdown",
|
||||
"id": "23601ddb",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The above code is using default tfids values. It's highly recommended to fit the tf-idf values to your own corpus. You can do it as follow:\n",
"\n",
|
||||
"```python\n",
|
||||
"corpus = [\"foo\", \"bar\", \"world\", \"hello\"]\n",
|
||||
"\n",
|
||||
"# fit tf-idf values on your corpus\n",
|
||||
"bm25_encoder.fit(corpus)\n",
|
||||
"\n",
|
||||
"# store the values to a json file\n",
|
||||
"bm25_encoder.dump(\"bm25_values.json\")\n",
|
||||
"\n",
|
||||
"# load to your BM25Encoder object\n",
|
||||
"bm25_encoder = BM25Encoder().load(\"bm25_values.json\")\n",
|
||||
"```"
|
||||
"# load bert tokenizer from huggingface\n",
|
||||
"tokenizer = BertTokenizerFast.from_pretrained(\n",
|
||||
" 'bert-base-uncased'\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -189,12 +137,12 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 81,
|
||||
"execution_count": 7,
|
||||
"id": "ac77d835",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"retriever = PineconeHybridSearchRetriever(embeddings=embeddings, sparse_encoder=bm25_encoder, index=index)"
|
||||
"retriever = PineconeHybridSearchRetriever(embeddings=embeddings, index=index, tokenizer=tokenizer)"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -209,16 +157,23 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 82,
|
||||
"execution_count": 8,
|
||||
"id": "98b1c017",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"100%|██████████| 1/1 [00:02<00:00, 2.27s/it]\n"
|
||||
]
|
||||
"data": {
|
||||
"application/vnd.jupyter.widget-view+json": {
|
||||
"model_id": "4d6f3ee7ca754d07a1a18d100d99e0cd",
|
||||
"version_major": 2,
|
||||
"version_minor": 0
|
||||
},
|
||||
"text/plain": [
|
||||
" 0%| | 0/1 [00:00<?, ?it/s]"
|
||||
]
|
||||
},
|
||||
"metadata": {},
|
||||
"output_type": "display_data"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
@@ -237,7 +192,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 83,
|
||||
"execution_count": 9,
|
||||
"id": "c0455218",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -247,7 +202,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 84,
|
||||
"execution_count": 10,
|
||||
"id": "7dfa5c29",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -257,7 +212,7 @@
|
||||
"Document(page_content='foo', metadata={})"
|
||||
]
|
||||
},
|
||||
"execution_count": 84,
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
@@ -265,11 +220,19 @@
|
||||
"source": [
|
||||
"result[0]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "74bd9256",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": ".venv",
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
@@ -283,12 +246,7 @@
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.13"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
"hash": "7ec0d8babd8cabf695a1d94b1e586d626e046c9df609f6bad065d15d49f67f54"
|
||||
}
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
|
||||
@@ -60,14 +60,14 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"CPU times: user 14.2 ms, sys: 4.9 ms, total: 19.1 ms\n",
|
||||
"Wall time: 1.1 s\n"
|
||||
"CPU times: user 30.7 ms, sys: 18.6 ms, total: 49.3 ms\n",
|
||||
"Wall time: 791 ms\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side.'"
|
||||
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!\""
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
@@ -91,14 +91,14 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"CPU times: user 162 µs, sys: 7 µs, total: 169 µs\n",
|
||||
"Wall time: 175 µs\n"
|
||||
"CPU times: user 80 µs, sys: 0 ns, total: 80 µs\n",
|
||||
"Wall time: 83.9 µs\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side.'"
|
||||
"\"\\n\\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!\""
|
||||
]
|
||||
},
|
||||
"execution_count": 5,
|
||||
@@ -252,249 +252,6 @@
|
||||
"llm(\"Tell me a joke\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "684eab55",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## GPTCache\n",
|
||||
"\n",
|
||||
"We can use [GPTCache](https://github.com/zilliztech/GPTCache) for exact match caching OR to cache results based on semantic similarity\n",
|
||||
"\n",
|
||||
"Let's first start with an example of exact match"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "14a82124",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import gptcache\n",
|
||||
"from gptcache.processor.pre import get_prompt\n",
|
||||
"from gptcache.manager.factory import get_data_manager\n",
|
||||
"from langchain.cache import GPTCache\n",
|
||||
"\n",
|
||||
"# Avoid multiple caches using the same file, causing different llm model caches to affect each other\n",
"i = 0\n",
|
||||
"file_prefix = \"data_map\"\n",
|
||||
"\n",
|
||||
"def init_gptcache_map(cache_obj: gptcache.Cache):\n",
|
||||
" global i\n",
|
||||
" cache_path = f'{file_prefix}_{i}.txt'\n",
|
||||
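"    # get_prompt extracts the raw prompt string, which serves as the lookup key for this exact-match cache\n",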
" cache_obj.init(\n",
|
||||
" pre_embedding_func=get_prompt,\n",
|
||||
" data_manager=get_data_manager(data_path=cache_path),\n",
|
||||
" )\n",
|
||||
" i += 1\n",
|
||||
"\n",
|
||||
"langchain.llm_cache = GPTCache(init_gptcache_map)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "9e4ecfd1",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"CPU times: user 8.6 ms, sys: 3.82 ms, total: 12.4 ms\n",
|
||||
"Wall time: 881 ms\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"# The first time, it is not yet in cache, so it should take longer\n",
|
||||
"llm(\"Tell me a joke\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "c98bbe3b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"CPU times: user 286 µs, sys: 21 µs, total: 307 µs\n",
|
||||
"Wall time: 316 µs\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"# The second time it is, so it goes faster\n",
"llm(\"Tell me a joke\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "502b6076",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Let's now show an example of similarity caching"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "b3c663bb",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import gptcache\n",
|
||||
"from gptcache.processor.pre import get_prompt\n",
|
||||
"from gptcache.manager.factory import get_data_manager\n",
|
||||
"from langchain.cache import GPTCache\n",
|
||||
"from gptcache.manager import get_data_manager, CacheBase, VectorBase\n",
|
||||
"from gptcache import Cache\n",
|
||||
"from gptcache.embedding import Onnx\n",
|
||||
"from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation\n",
|
||||
"\n",
|
||||
"# Avoid multiple caches using the same file, causing different llm model caches to affect each other\n",
"i = 0\n",
|
||||
"file_prefix = \"data_map\"\n",
|
||||
"llm_cache = Cache()\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"def init_gptcache_map(cache_obj: gptcache.Cache):\n",
|
||||
" global i\n",
|
||||
" cache_path = f'{file_prefix}_{i}.txt'\n",
|
||||
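"    # semantic cache pieces: ONNX embedding model, SQLite scalar store, FAISS vector index\n",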
" onnx = Onnx()\n",
|
||||
" cache_base = CacheBase('sqlite')\n",
|
||||
" vector_base = VectorBase('faiss', dimension=onnx.dimension)\n",
|
||||
" data_manager = get_data_manager(cache_base, vector_base, max_size=10, clean_size=2)\n",
|
||||
" cache_obj.init(\n",
|
||||
" pre_embedding_func=get_prompt,\n",
|
||||
" embedding_func=onnx.to_embeddings,\n",
|
||||
" data_manager=data_manager,\n",
|
||||
" similarity_evaluation=SearchDistanceEvaluation(),\n",
|
||||
" )\n",
|
||||
" i += 1\n",
|
||||
"\n",
|
||||
"langchain.llm_cache = GPTCache(init_gptcache_map)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "8c273ced",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"CPU times: user 1.01 s, sys: 153 ms, total: 1.16 s\n",
|
||||
"Wall time: 2.49 s\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"# The first time, it is not yet in cache, so it should take longer\n",
|
||||
"llm(\"Tell me a joke\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "93e21a5f",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"CPU times: user 745 ms, sys: 13.2 ms, total: 758 ms\n",
|
||||
"Wall time: 136 ms\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 11,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"# This is an exact match, so it finds it in the cache\n",
|
||||
"llm(\"Tell me a joke\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "c4bb024b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"CPU times: user 737 ms, sys: 7.79 ms, total: 745 ms\n",
|
||||
"Wall time: 135 ms\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'\\n\\nWhy did the chicken cross the road?\\n\\nTo get to the other side.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 12,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"%%time\n",
|
||||
"# This is not an exact match, but semantically within distance so it hits!\n",
"llm(\"Tell me joke\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "934943dc",
|
||||
|
||||
@@ -1,538 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ba5f8741",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Custom Agent with PlugIn Retrieval\n",
|
||||
"\n",
|
||||
"This notebook combines two concepts in order to build a custom agent that can interact with AI Plugins:\n",
|
||||
"\n",
|
||||
"1. [Custom Agent with Retrieval](../../modules/agents/agents/custom_agent_with_plugin_retrieval.ipynb): This introduces the concept of retrieving many tools, which is useful when trying to work with arbitrarily many plugins.\n",
|
||||
"2. [Natural Language API Chains](../../modules/chains/examples/openapi.ipynb): This creates Natural Language wrappers around OpenAPI endpoints. This is useful because (1) plugins use OpenAPI endpoints under the hood, (2) wrapping them in an NLAChain allows the router agent to call it more easily.\n",
"\n",
|
||||
"The novel idea introduced in this notebook is the idea of using retrieval to select not the tools explicitly, but the set of OpenAPI specs to use. We can then generate tools from those OpenAPI specs. The use case for this is when trying to get agents to use plugins. It may be more efficient to choose plugins first, then the endpoints, rather than the endpoints directly. This is because the plugins may contain more useful information for selection."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "fea4812c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Set up environment\n",
|
||||
"\n",
|
||||
"Do necessary imports, etc."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "9af9734e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser\n",
|
||||
"from langchain.prompts import StringPromptTemplate\n",
|
||||
"from langchain import OpenAI, SerpAPIWrapper, LLMChain\n",
|
||||
"from typing import List, Union\n",
|
||||
"from langchain.schema import AgentAction, AgentFinish\n",
|
||||
"from langchain.agents.agent_toolkits import NLAToolkit\n",
|
||||
"from langchain.tools.plugin import AIPlugin\n",
|
||||
"import re"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2f91d8b4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Setup LLM"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "a1a3b59c",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = OpenAI(temperature=0)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6df0253f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Set up plugins\n",
|
||||
"\n",
|
||||
"Load and index plugins"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "becda2a1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"urls = [\n",
|
||||
" \"https://datasette.io/.well-known/ai-plugin.json\",\n",
|
||||
" \"https://api.speak.com/.well-known/ai-plugin.json\",\n",
|
||||
" \"https://www.wolframalpha.com/.well-known/ai-plugin.json\",\n",
|
||||
" \"https://www.zapier.com/.well-known/ai-plugin.json\",\n",
|
||||
" \"https://www.klarna.com/.well-known/ai-plugin.json\",\n",
|
||||
" \"https://www.joinmilo.com/.well-known/ai-plugin.json\",\n",
|
||||
" \"https://slack.com/.well-known/ai-plugin.json\",\n",
|
||||
" \"https://schooldigger.com/.well-known/ai-plugin.json\",\n",
|
||||
"]\n",
|
||||
"\n",
|
||||
"AI_PLUGINS = [AIPlugin.from_url(url) for url in urls]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "17362717",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Tool Retriever\n",
|
||||
"\n",
|
||||
"We will use a vectorstore to create embeddings for each tool description. Then, for an incoming query we can create embeddings for that query and do a similarity search for relevant tools."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "77c4be4b",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.vectorstores import FAISS\n",
|
||||
"from langchain.embeddings import OpenAIEmbeddings\n",
|
||||
"from langchain.schema import Document"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "9092a158",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
|
||||
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
|
||||
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
|
||||
"Attempting to load an OpenAPI 3.0.2 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
|
||||
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
|
||||
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
|
||||
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
|
||||
"Attempting to load an OpenAPI 3.0.1 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n",
|
||||
"Attempting to load a Swagger 2.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"embeddings = OpenAIEmbeddings()\n",
|
||||
"docs = [\n",
|
||||
" Document(page_content=plugin.description_for_model, \n",
|
||||
" metadata={\"plugin_name\": plugin.name_for_model}\n",
|
||||
" )\n",
|
||||
" for plugin in AI_PLUGINS\n",
|
||||
"]\n",
|
||||
"vector_store = FAISS.from_documents(docs, embeddings)\n",
|
||||
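"# pre-build one NLAToolkit per plugin so retrieval only needs to look toolkits up by name\n",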
"toolkits_dict = {plugin.name_for_model: \n",
|
||||
" NLAToolkit.from_llm_and_ai_plugin(llm, plugin) \n",
|
||||
" for plugin in AI_PLUGINS}"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "735a7566",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"retriever = vector_store.as_retriever()\n",
|
||||
"\n",
|
||||
"def get_tools(query):\n",
|
||||
" # Get documents, which contain the Plugins to use\n",
|
||||
" docs = retriever.get_relevant_documents(query)\n",
|
||||
" # Get the toolkits, one for each plugin\n",
|
||||
" tool_kits = [toolkits_dict[d.metadata[\"plugin_name\"]] for d in docs]\n",
|
||||
" # Get the tools: a separate NLAChain for each endpoint\n",
|
||||
" tools = []\n",
|
||||
" for tk in tool_kits:\n",
|
||||
" tools.extend(tk.nla_tools)\n",
|
||||
" return tools"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "7699afd7",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"We can now test this retriever to see if it seems to work."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "425f2886",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['Milo.askMilo',\n",
|
||||
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.search_all_actions',\n",
|
||||
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.preview_a_zap',\n",
|
||||
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_configuration_link',\n",
|
||||
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.list_exposed_actions',\n",
|
||||
" 'SchoolDigger_API_V2.0.Autocomplete_GetSchools',\n",
|
||||
" 'SchoolDigger_API_V2.0.Districts_GetAllDistricts2',\n",
|
||||
" 'SchoolDigger_API_V2.0.Districts_GetDistrict2',\n",
|
||||
" 'SchoolDigger_API_V2.0.Rankings_GetSchoolRank2',\n",
|
||||
" 'SchoolDigger_API_V2.0.Rankings_GetRank_District',\n",
|
||||
" 'SchoolDigger_API_V2.0.Schools_GetAllSchools20',\n",
|
||||
" 'SchoolDigger_API_V2.0.Schools_GetSchool20',\n",
|
||||
" 'Speak.translate',\n",
|
||||
" 'Speak.explainPhrase',\n",
|
||||
" 'Speak.explainTask']"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"tools = get_tools(\"What could I do today with my kiddo\")\n",
|
||||
"[t.name for t in tools]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "3aa88768",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"['Open_AI_Klarna_product_Api.productsUsingGET',\n",
|
||||
" 'Milo.askMilo',\n",
|
||||
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.search_all_actions',\n",
|
||||
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.preview_a_zap',\n",
|
||||
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.get_configuration_link',\n",
|
||||
" 'Zapier_Natural_Language_Actions_(NLA)_API_(Dynamic)_-_Beta.list_exposed_actions',\n",
|
||||
" 'SchoolDigger_API_V2.0.Autocomplete_GetSchools',\n",
|
||||
" 'SchoolDigger_API_V2.0.Districts_GetAllDistricts2',\n",
|
||||
" 'SchoolDigger_API_V2.0.Districts_GetDistrict2',\n",
|
||||
" 'SchoolDigger_API_V2.0.Rankings_GetSchoolRank2',\n",
|
||||
" 'SchoolDigger_API_V2.0.Rankings_GetRank_District',\n",
|
||||
" 'SchoolDigger_API_V2.0.Schools_GetAllSchools20',\n",
|
||||
" 'SchoolDigger_API_V2.0.Schools_GetSchool20']"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"tools = get_tools(\"what shirts can i buy?\")\n",
|
||||
"[t.name for t in tools]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2e7a075c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prompt Template\n",
|
||||
"\n",
|
||||
"The prompt template is pretty standard, because we're not actually changing that much logic in the actual prompt template, but rather we are just changing how retrieval is done."
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "339b1bb8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set up the base template\n",
|
||||
"template = \"\"\"Answer the following questions as best you can, but speaking as a pirate might speak. You have access to the following tools:\n",
|
||||
"\n",
|
||||
"{tools}\n",
|
||||
"\n",
|
||||
"Use the following format:\n",
|
||||
"\n",
|
||||
"Question: the input question you must answer\n",
|
||||
"Thought: you should always think about what to do\n",
|
||||
"Action: the action to take, should be one of [{tool_names}]\n",
|
||||
"Action Input: the input to the action\n",
|
||||
"Observation: the result of the action\n",
|
||||
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
|
||||
"Thought: I now know the final answer\n",
|
||||
"Final Answer: the final answer to the original input question\n",
|
||||
"\n",
|
||||
"Begin! Remember to speak as a pirate when giving your final answer. Use lots of \"Arg\"s\n",
|
||||
"\n",
|
||||
"Question: {input}\n",
|
||||
"{agent_scratchpad}\"\"\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "1583acdc",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"The custom prompt template now has the concept of a tools_getter, which we call on the input to select the tools to use"
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "fd969d31",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from typing import Callable\n",
|
||||
"# Set up a prompt template\n",
|
||||
"class CustomPromptTemplate(StringPromptTemplate):\n",
|
||||
" # The template to use\n",
|
||||
" template: str\n",
|
||||
" ############## NEW ######################\n",
|
||||
" # The list of tools available\n",
|
||||
" tools_getter: Callable\n",
|
||||
" \n",
|
||||
" def format(self, **kwargs) -> str:\n",
|
||||
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
|
||||
" # Format them in a particular way\n",
|
||||
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
|
||||
" thoughts = \"\"\n",
|
||||
" for action, observation in intermediate_steps:\n",
|
||||
" thoughts += action.log\n",
|
||||
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
|
||||
" # Set the agent_scratchpad variable to that value\n",
|
||||
" kwargs[\"agent_scratchpad\"] = thoughts\n",
|
||||
" ############## NEW ######################\n",
|
||||
" tools = self.tools_getter(kwargs[\"input\"])\n",
|
||||
" # Create a tools variable from the list of tools provided\n",
|
||||
" kwargs[\"tools\"] = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])\n",
|
||||
" # Create a list of tool names for the tools provided\n",
|
||||
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in tools])\n",
|
||||
" return self.template.format(**kwargs)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "798ef9fb",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"prompt = CustomPromptTemplate(\n",
|
||||
" template=template,\n",
|
||||
" tools_getter=get_tools,\n",
|
||||
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
|
||||
" # This includes the `intermediate_steps` variable because that is needed\n",
|
||||
" input_variables=[\"input\", \"intermediate_steps\"]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ef3a1af3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Output Parser\n",
|
||||
"\n",
|
||||
"The output parser is unchanged from the previous notebook, since we are not changing anything about the output format."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "7c6fe0d3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"class CustomOutputParser(AgentOutputParser):\n",
|
||||
" \n",
|
||||
" def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
|
||||
" # Check if agent should finish\n",
|
||||
" if \"Final Answer:\" in llm_output:\n",
|
||||
" return AgentFinish(\n",
|
||||
" # Return values is generally always a dictionary with a single `output` key\n",
|
||||
" # It is not recommended to try anything else at the moment :)\n",
|
||||
" return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
|
||||
" log=llm_output,\n",
|
||||
" )\n",
|
||||
" # Parse out the action and action input\n",
|
||||
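"        # group 1 captures the tool name, group 2 the tool input\n",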
" regex = r\"Action: (.*?)[\\n]*Action Input:[\\s]*(.*)\"\n",
|
||||
" match = re.search(regex, llm_output, re.DOTALL)\n",
|
||||
" if not match:\n",
|
||||
" raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
|
||||
" action = match.group(1).strip()\n",
|
||||
" action_input = match.group(2)\n",
|
||||
" # Return the action and action input\n",
|
||||
" return AgentAction(tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "d278706a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"output_parser = CustomOutputParser()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "170587b1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Set up LLM, stop sequence, and the agent\n",
|
||||
"\n",
|
||||
"Also the same as the previous notebook"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "f9d4c374",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm = OpenAI(temperature=0)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "9b1cc2a2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# LLM chain consisting of the LLM and a prompt\n",
|
||||
"llm_chain = LLMChain(llm=llm, prompt=prompt)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "e4f5092f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tool_names = [tool.name for tool in tools]\n",
|
||||
"agent = LLMSingleActionAgent(\n",
|
||||
" llm_chain=llm_chain, \n",
|
||||
" output_parser=output_parser,\n",
|
||||
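"    # stop the LLM before it writes its own Observation; the real one is supplied by the tool\n",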
" stop=[\"\\nObservation:\"], \n",
|
||||
" allowed_tools=tool_names\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "aa8a5326",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Use the Agent\n",
|
||||
"\n",
|
||||
"Now we can use it!"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "490604e9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"id": "653b1617",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mThought: I need to find a product API\n",
|
||||
"Action: Open_AI_Klarna_product_Api.productsUsingGET\n",
|
||||
"Action Input: shirts\u001b[0m\n",
|
||||
"\n",
|
||||
"Observation:\u001b[36;1m\u001b[1;3mI found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.\u001b[0m\u001b[32;1m\u001b[1;3m I now know what shirts I can buy\n",
|
||||
"Final Answer: Arg, I found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Arg, I found 10 shirts from the API response. They range in price from $9.99 to $450.00 and come in a variety of materials, colors, and patterns.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 18,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_executor.run(\"what shirts can i buy?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "2481ee76",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
|
||||
}
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
@@ -22,4 +22,3 @@ Specific examples of this include:
|
||||
- [Baby AGI](agents/baby_agi.ipynb): a notebook implementing [BabyAGI](https://github.com/yoheinakajima/babyagi) by Yohei Nakajima as LLM Chains
|
||||
- [Baby AGI with Tools](agents/baby_agi_with_agent.ipynb): building off the above notebook, this example substitutes in an agent with tools as the execution tools, allowing it to actually take actions.
|
||||
- [CAMEL](agents/camel_role_playing.ipynb): an implementation of the CAMEL (Communicative Agents for “Mind” Exploration of Large Scale Language Model Society) paper, where two agents communicate with each other.
- [AI Plugins](agents/custom_agent_with_plugin_retrieval.ipynb): an implementation of an agent that is designed to be able to use all AI Plugins.
|
||||
|
||||
@@ -99,16 +99,6 @@ class BaseSingleActionAgent(BaseModel):
|
||||
f"Got unsupported early_stopping_method `{early_stopping_method}`"
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def from_llm_and_tools(
|
||||
cls,
|
||||
llm: BaseLanguageModel,
|
||||
tools: Sequence[BaseTool],
|
||||
callback_manager: Optional[BaseCallbackManager] = None,
|
||||
**kwargs: Any,
|
||||
) -> BaseSingleActionAgent:
|
||||
raise NotImplementedError
|
||||
|
||||
@property
|
||||
def _agent_type(self) -> str:
|
||||
"""Return Identifier of agent type."""
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
"""Toolkit for interacting with API's using natural language."""
from __future__ import annotations
|
||||
|
||||
|
||||
from typing import Any, List, Optional, Sequence
|
||||
|
||||
@@ -11,7 +11,6 @@ from langchain.llms.base import BaseLLM
|
||||
from langchain.requests import Requests
|
||||
from langchain.tools.base import BaseTool
|
||||
from langchain.tools.openapi.utils.openapi_utils import OpenAPISpec
|
||||
from langchain.tools.plugin import AIPlugin
|
||||
|
||||
|
||||
class NLAToolkit(BaseToolkit):
|
||||
@@ -24,18 +23,19 @@ class NLAToolkit(BaseToolkit):
|
||||
"""Get the tools for all the API operations."""
|
||||
return list(self.nla_tools)
|
||||
|
||||
@staticmethod
|
||||
def _get_http_operation_tools(
|
||||
@classmethod
|
||||
def from_llm_and_spec(
|
||||
cls,
|
||||
llm: BaseLLM,
|
||||
spec: OpenAPISpec,
|
||||
requests: Optional[Requests] = None,
|
||||
verbose: bool = False,
|
||||
**kwargs: Any,
|
||||
) -> List[NLATool]:
|
||||
"""Get the tools for all the API operations."""
|
||||
**kwargs: Any
|
||||
) -> "NLAToolkit":
|
||||
"""Instantiate the toolkit by creating tools for each operation."""
|
||||
http_operation_tools: List[NLATool] = []
|
||||
if not spec.paths:
|
||||
return []
|
||||
http_operation_tools = []
|
||||
return cls(nla_tools=http_operation_tools)
|
||||
for path in spec.paths:
|
||||
for method in spec.get_methods_for_path(path):
|
||||
endpoint_tool = NLATool.from_llm_and_method(
|
||||
@@ -45,24 +45,9 @@ class NLAToolkit(BaseToolkit):
|
||||
spec=spec,
|
||||
requests=requests,
|
||||
verbose=verbose,
|
||||
**kwargs,
|
||||
**kwargs
|
||||
)
|
||||
http_operation_tools.append(endpoint_tool)
|
||||
return http_operation_tools
|
||||
|
||||
@classmethod
|
||||
def from_llm_and_spec(
|
||||
cls,
|
||||
llm: BaseLLM,
|
||||
spec: OpenAPISpec,
|
||||
requests: Optional[Requests] = None,
|
||||
verbose: bool = False,
|
||||
**kwargs: Any,
|
||||
) -> NLAToolkit:
|
||||
"""Instantiate the toolkit by creating tools for each operation."""
|
||||
http_operation_tools = cls._get_http_operation_tools(
|
||||
llm=llm, spec=spec, requests=requests, verbose=verbose, **kwargs
|
||||
)
|
||||
return cls(nla_tools=http_operation_tools)
|
||||
|
||||
@classmethod
|
||||
@@ -72,45 +57,10 @@ class NLAToolkit(BaseToolkit):
|
||||
open_api_url: str,
|
||||
requests: Optional[Requests] = None,
|
||||
verbose: bool = False,
|
||||
**kwargs: Any,
|
||||
) -> NLAToolkit:
|
||||
**kwargs: Any
|
||||
) -> "NLAToolkit":
|
||||
"""Instantiate the toolkit from an OpenAPI Spec URL"""
|
||||
spec = OpenAPISpec.from_url(open_api_url)
|
||||
return cls.from_llm_and_spec(
|
||||
llm=llm, spec=spec, requests=requests, verbose=verbose, **kwargs
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def from_llm_and_ai_plugin(
|
||||
cls,
|
||||
llm: BaseLLM,
|
||||
ai_plugin: AIPlugin,
|
||||
requests: Optional[Requests] = None,
|
||||
verbose: bool = False,
|
||||
**kwargs: Any,
|
||||
) -> NLAToolkit:
|
||||
"""Instantiate the toolkit from an OpenAPI Spec URL"""
spec = OpenAPISpec.from_url(ai_plugin.api.url)
|
||||
# TODO: Merge optional Auth information with the `requests` argument
|
||||
return cls.from_llm_and_spec(
|
||||
llm=llm,
|
||||
spec=spec,
|
||||
requests=requests,
|
||||
verbose=verbose,
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def from_llm_and_ai_plugin_url(
|
||||
cls,
|
||||
llm: BaseLLM,
|
||||
ai_plugin_url: str,
|
||||
requests: Optional[Requests] = None,
|
||||
verbose: bool = False,
|
||||
**kwargs: Any,
|
||||
) -> NLAToolkit:
|
||||
"""Instantiate the toolkit from an OpenAPI Spec URL"""
plugin = AIPlugin.from_url(ai_plugin_url)
|
||||
return cls.from_llm_and_ai_plugin(
|
||||
llm=llm, ai_plugin=plugin, requests=requests, verbose=verbose, **kwargs
|
||||
)
|
||||
|
||||
@@ -22,9 +22,6 @@ def create_openapi_agent(
|
||||
suffix: str = OPENAPI_SUFFIX,
|
||||
format_instructions: str = FORMAT_INSTRUCTIONS,
|
||||
input_variables: Optional[List[str]] = None,
|
||||
max_iterations: Optional[int] = 15,
|
||||
max_execution_time: Optional[float] = None,
|
||||
early_stopping_method: str = "force",
|
||||
verbose: bool = False,
|
||||
return_intermediate_steps: bool = False,
|
||||
**kwargs: Any,
|
||||
@@ -50,7 +47,4 @@ def create_openapi_agent(
|
||||
tools=toolkit.get_tools(),
|
||||
verbose=verbose,
|
||||
return_intermediate_steps=return_intermediate_steps,
|
||||
max_iterations=max_iterations,
|
||||
max_execution_time=max_execution_time,
|
||||
early_stopping_method=early_stopping_method,
|
||||
)
|
||||
|
@@ -14,13 +14,9 @@ from langchain.agents.agent_toolkits.openapi.planner_prompt import (
    API_PLANNER_PROMPT,
    API_PLANNER_TOOL_DESCRIPTION,
    API_PLANNER_TOOL_NAME,
    PARSING_DELETE_PROMPT,
    PARSING_GET_PROMPT,
    PARSING_PATCH_PROMPT,
    PARSING_POST_PROMPT,
    REQUESTS_DELETE_TOOL_DESCRIPTION,
    REQUESTS_GET_TOOL_DESCRIPTION,
    REQUESTS_PATCH_TOOL_DESCRIPTION,
    REQUESTS_POST_TOOL_DESCRIPTION,
)
from langchain.agents.agent_toolkits.openapi.spec import ReducedOpenAPISpec
@@ -94,56 +90,6 @@ class RequestsPostToolWithParsing(BaseRequestsTool, BaseTool):
        raise NotImplementedError()


class RequestsPatchToolWithParsing(BaseRequestsTool, BaseTool):
    name = "requests_patch"
    description = REQUESTS_PATCH_TOOL_DESCRIPTION

    response_length: Optional[int] = MAX_RESPONSE_LENGTH
    llm_chain = LLMChain(
        llm=OpenAI(),
        prompt=PARSING_PATCH_PROMPT,
    )

    def _run(self, text: str) -> str:
        try:
            data = json.loads(text)
        except json.JSONDecodeError as e:
            raise e
        response = self.requests_wrapper.patch(data["url"], data["data"])
        response = response[: self.response_length]
        return self.llm_chain.predict(
            response=response, instructions=data["output_instructions"]
        ).strip()

    async def _arun(self, text: str) -> str:
        raise NotImplementedError()


class RequestsDeleteToolWithParsing(BaseRequestsTool, BaseTool):
    name = "requests_delete"
    description = REQUESTS_DELETE_TOOL_DESCRIPTION

    response_length: Optional[int] = MAX_RESPONSE_LENGTH
    llm_chain = LLMChain(
        llm=OpenAI(),
        prompt=PARSING_DELETE_PROMPT,
    )

    def _run(self, text: str) -> str:
        try:
            data = json.loads(text)
        except json.JSONDecodeError as e:
            raise e
        response = self.requests_wrapper.delete(data["url"])
        response = response[: self.response_length]
        return self.llm_chain.predict(
            response=response, instructions=data["output_instructions"]
        ).strip()

    async def _arun(self, text: str) -> str:
        raise NotImplementedError()


#
# Orchestrator, planner, controller.
#
@@ -211,7 +157,7 @@ def _create_api_controller_tool(
    base_url = api_spec.servers[0]["url"]  # TODO: do better.

    def _create_and_run_api_controller_agent(plan_str: str) -> str:
        pattern = r"\b(GET|POST|PATCH|DELETE)\s+(/\S+)*"
        pattern = r"\b(GET|POST)\s+(/\S+)*"
        matches = re.findall(pattern, plan_str)
        endpoint_names = [
            "{method} {route}".format(method=method, route=route.split("?")[0])

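A standalone sketch of what the narrower regex above matches; with PATCH and DELETE dropped from the alternation, those plan steps are no longer extracted as endpoints:

```python
import re

# The narrower pattern from the hunk above.
pattern = r"\b(GET|POST)\s+(/\S+)*"
plan_str = "1. GET /user to find the user's id\n2. DELETE /users/1/cart"
# Only the GET step matches; the DELETE step is silently ignored.
print(re.findall(pattern, plan_str))  # [('GET', '/user')]
```
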
@@ -2,16 +2,13 @@

from langchain.prompts.prompt import PromptTemplate


API_PLANNER_PROMPT = """You are a planner that plans a sequence of API calls to assist with user queries against an API.

You should:
1) evaluate whether the user query can be solved by the API documented below. If no, say why.
2) if yes, generate a plan of API calls and say what they are doing step by step.
3) If the plan includes a DELETE call, you should always return an ask from the User for authorization first unless the User has specifically asked to delete something.

You should only use API endpoints documented below ("Endpoints you can use:").
You can only use the DELETE tool if the User has specifically asked to delete something. Otherwise, you should request authorization from the User first.
Some user queries can be resolved in a single API call, but some will require several API calls.
The plan will be passed to an API controller that can format it into web requests and return the responses.

@@ -23,31 +20,15 @@ Fake endpoints for examples:
GET /user to get information about the current user
GET /products/search search across products
POST /users/{{id}}/cart to add products to a user's cart
PATCH /users/{{id}}/cart to update a user's cart
DELETE /users/{{id}}/cart to delete a user's cart

User query: tell me a joke
Plan: Sorry, this API's domain is shopping, not comedy.

User query: I want to buy a couch
Plan: 1. GET /products with a query param to search for couches
Plan: 1. GET /products/search to search for couches
2. GET /user to find the user's id
3. POST /users/{{id}}/cart to add a couch to the user's cart

User query: I want to add a lamp to my cart
Plan: 1. GET /products with a query param to search for lamps
2. GET /user to find the user's id
3. PATCH /users/{{id}}/cart to add a lamp to the user's cart

User query: I want to delete my cart
Plan: 1. GET /user to find the user's id
2. DELETE required. Did user specify DELETE or previously authorize? Yes, proceed.
3. DELETE /users/{{id}}/cart to delete the user's cart

User query: I want to start a new cart
Plan: 1. GET /user to find the user's id
2. DELETE required. Did user specify DELETE or previously authorize? No, ask for authorization.
3. Are you sure you want to delete your cart?
----

Here are endpoints you can use. Do not reference any of the endpoints above.
@@ -102,7 +83,6 @@ API_CONTROLLER_TOOL_DESCRIPTION = f"Can be used to execute a plan of API calls,
API_ORCHESTRATOR_PROMPT = """You are an agent that assists with user queries against API, things like querying information or creating resources.
Some user queries can be resolved in a single API call, particularly if you can find appropriate params from the OpenAPI spec; though some require several API calls.
You should always plan your API calls first, and then execute the plan second.
If the plan includes a DELETE call, be sure to ask the User for authorization first unless the User has specifically asked to delete something.
You should never return information without executing the api_controller tool.


@@ -165,7 +145,7 @@ REQUESTS_POST_TOOL_DESCRIPTION = """Use this when you want to POST to a website.
Input to the tool should be a json string with 3 keys: "url", "data", and "output_instructions".
The value of "url" should be a string.
The value of "data" should be a dictionary of key-value pairs you want to POST to the url.
The value of "output_instructions" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the POST request creates.
The value of "summary_instructions" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the POST request creates.
Always use double quotes for strings in the json string."""

PARSING_POST_PROMPT = PromptTemplate(
@@ -177,37 +157,3 @@ If the response indicates an error, you should instead output a summary of the e
Output:""",
    input_variables=["response", "instructions"],
)

REQUESTS_PATCH_TOOL_DESCRIPTION = """Use this when you want to PATCH content on a website.
Input to the tool should be a json string with 3 keys: "url", "data", and "output_instructions".
The value of "url" should be a string.
The value of "data" should be a dictionary of key-value pairs of the body params available in the OpenAPI spec you want to PATCH the content with at the url.
The value of "output_instructions" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the PATCH request creates.
Always use double quotes for strings in the json string."""

PARSING_PATCH_PROMPT = PromptTemplate(
    template="""Here is an API response:\n\n{response}\n\n====
Your task is to extract some information according to these instructions: {instructions}
When working with API objects, you should usually use ids over names. Do not return any ids or names that are not in the response.
If the response indicates an error, you should instead output a summary of the error.

Output:""",
    input_variables=["response", "instructions"],
)

REQUESTS_DELETE_TOOL_DESCRIPTION = """ONLY USE THIS TOOL WHEN THE USER HAS SPECIFICALLY REQUESTED TO DELETE CONTENT FROM A WEBSITE.
Input to the tool should be a json string with 2 keys: "url", and "output_instructions".
The value of "url" should be a string.
The value of "output_instructions" should be instructions on what information to extract from the response, for example the id(s) for a resource(s) that the DELETE request creates.
Always use double quotes for strings in the json string.
ONLY USE THIS TOOL IF THE USER HAS SPECIFICALLY REQUESTED TO DELETE SOMETHING."""

PARSING_DELETE_PROMPT = PromptTemplate(
    template="""Here is an API response:\n\n{response}\n\n====
Your task is to extract some information according to these instructions: {instructions}
When working with API objects, you should usually use ids over names. Do not return any ids or names that are not in the response.
If the response indicates an error, you should instead output a summary of the error.

Output:""",
    input_variables=["response", "instructions"],
)

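A hedged illustration of the tool-input contract the descriptions above ask the agent to follow; the URL and values are made up:

```python
import json

# What a well-formed requests_post input would look like under the contract
# above: a JSON string with "url", "data", and "output_instructions" keys.
tool_input = json.dumps(
    {
        "url": "https://example.com/api/items",  # hypothetical endpoint
        "data": {"name": "lamp"},
        "output_instructions": "Extract the id of the created item",
    }
)
```
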
@@ -20,7 +20,6 @@ def create_pandas_dataframe_agent(
    verbose: bool = False,
    return_intermediate_steps: bool = False,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
    early_stopping_method: str = "force",
    **kwargs: Any,
) -> AgentExecutor:
@@ -49,6 +48,5 @@ def create_pandas_dataframe_agent(
        verbose=verbose,
        return_intermediate_steps=return_intermediate_steps,
        max_iterations=max_iterations,
        max_execution_time=max_execution_time,
        early_stopping_method=early_stopping_method,
    )

@@ -21,7 +21,6 @@ def create_sql_agent(
    input_variables: Optional[List[str]] = None,
    top_k: int = 10,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
    early_stopping_method: str = "force",
    verbose: bool = False,
    **kwargs: Any,
@@ -48,6 +47,5 @@ def create_sql_agent(
        tools=tools,
        verbose=verbose,
        max_iterations=max_iterations,
        max_execution_time=max_execution_time,
        early_stopping_method=early_stopping_method,
    )

@@ -8,4 +8,3 @@ class AgentType(str, Enum):
    CONVERSATIONAL_REACT_DESCRIPTION = "conversational-react-description"
    CHAT_ZERO_SHOT_REACT_DESCRIPTION = "chat-zero-shot-react-description"
    CHAT_CONVERSATIONAL_REACT_DESCRIPTION = "chat-conversational-react-description"
    CHAT_ZERO_SHOT_REACT_DESCRIPTION_V2 = "chat-zero-shot-react-description-002"

@@ -1,53 +0,0 @@
from typing import Any, List, Optional, Sequence

from langchain.agents.agent import AgentOutputParser, LLMSingleActionAgent
from langchain.agents.chat_v2.prompt import (
    FORMAT_INSTRUCTIONS,
    PREFIX,
    SUFFIX,
    ChatOutputParser,
    create_prompt,
)
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.llm import LLMChain
from langchain.schema import BaseLanguageModel
from langchain.tools import BaseTool


class ChatAgentV2(LLMSingleActionAgent):
    @classmethod
    def from_llm_and_tools(
        cls,
        llm: BaseLanguageModel,
        tools: Sequence[BaseTool],
        callback_manager: Optional[BaseCallbackManager] = None,
        prefix: str = PREFIX,
        suffix: str = SUFFIX,
        format_instructions: str = FORMAT_INSTRUCTIONS,
        input_variables: Optional[List[str]] = None,
        output_parser: Optional[AgentOutputParser] = None,
        stop: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> LLMSingleActionAgent:
        """Construct an agent from an LLM and tools."""
        _stop = stop or ["Observation:"]
        _output_parser = output_parser or ChatOutputParser()
        prompt = create_prompt(
            tools,
            prefix=prefix,
            suffix=suffix,
            format_instructions=format_instructions,
            input_variables=input_variables,
        )
        llm_chain = LLMChain(
            llm=llm,
            prompt=prompt,
            callback_manager=callback_manager,
        )
        return cls(
            llm_chain=llm_chain, output_parser=_output_parser, stop=_stop, **kwargs
        )

    @property
    def _agent_type(self) -> str:
        raise ValueError
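
A minimal sketch of wiring this agent into an executor, assuming the `chat_v2` module above is present (it exists only on the v0.0.139 side of this diff); the tools list is a placeholder:

```python
# Hypothetical usage sketch for ChatAgentV2; supply real BaseTool instances.
from langchain.agents import AgentExecutor
from langchain.agents.chat_v2.base import ChatAgentV2
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
tools = []  # placeholder: real tools would go here
agent = ChatAgentV2.from_llm_and_tools(llm=llm, tools=tools)
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
```
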
@@ -1,84 +0,0 @@
# flake8: noqa
import json
from langchain.prompts.chat import (
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from langchain.agents.schema import AgentScratchPadChatPromptTemplate
from langchain.prompts.base import BasePromptTemplate
from langchain.schema import AgentAction, AgentFinish
from langchain.tools.base import BaseTool
from typing import Sequence, Optional, List, Union
from langchain.agents.agent import AgentOutputParser

PREFIX = """Answer the following questions as best you can. You have access to the following tools:"""
FORMAT_INSTRUCTIONS = """The way you use the tools is by specifying a json blob.
Specifically, this json should have an `action` key (with the name of the tool to use) and an `action_input` key (with the input to the tool going here).

The only values that should be in the "action" field are: {tool_names}

The $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:

```
{{{{
  "action": $TOOL_NAME,
  "action_input": $INPUT
}}}}
```

ALWAYS use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action:
```
$JSON_BLOB
```
Observation: the result of the action
... (this Thought/Action/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question"""
SUFFIX = """Begin! Reminder to always use the exact characters `Final Answer` when responding."""


def create_prompt(
    tools: Sequence[BaseTool],
    prefix: str = PREFIX,
    suffix: str = SUFFIX,
    format_instructions: str = FORMAT_INSTRUCTIONS,
    input_variables: Optional[List[str]] = None,
) -> BasePromptTemplate:
    tool_strings = "\n".join([f"{tool.name}: {tool.description}" for tool in tools])
    tool_names = ", ".join([tool.name for tool in tools])
    format_instructions = format_instructions.format(tool_names=tool_names)
    template = "\n\n".join([prefix, tool_strings, format_instructions, suffix])
    messages = [
        SystemMessagePromptTemplate.from_template(template),
        HumanMessagePromptTemplate.from_template("{input}\n\n{agent_scratchpad}"),
    ]
    if input_variables is None:
        input_variables = ["input", "intermediate_steps"]
    return AgentScratchPadChatPromptTemplate(
        input_variables=input_variables, messages=messages
    )


class ChatOutputParser(AgentOutputParser):
    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in text:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": text.split("Final Answer:")[-1].strip()},
                log=text,
            )
        try:
            _, action, _ = text.split("```")
            response = json.loads(action.strip())
            agent_action = AgentAction(
                tool=response["action"], tool_input=response["action_input"], log=text
            )
            return agent_action

        except Exception:
            raise ValueError(f"Could not parse LLM output: {text}")
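
A standalone sketch of the parsing logic above, assuming the `ChatOutputParser` class just shown: a reply containing "Final Answer:" is turned into an `AgentFinish`.

```python
# Illustrative only; relies on the ChatOutputParser defined above.
parser = ChatOutputParser()
result = parser.parse("Thought: I now know the final answer\nFinal Answer: 42")
print(result.return_values)  # {'output': '42'}
```
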
@@ -5,10 +5,9 @@ from typing import Any, List, Optional, Union

import yaml

from langchain.agents.agent import BaseSingleActionAgent
from langchain.agents.agent import Agent
from langchain.agents.agent_types import AgentType
from langchain.agents.chat.base import ChatAgent
from langchain.agents.chat_v2.base import ChatAgentV2
from langchain.agents.conversational.base import ConversationalAgent
from langchain.agents.conversational_chat.base import ConversationalChatAgent
from langchain.agents.mrkl.base import ZeroShotAgent
@@ -26,7 +25,6 @@ AGENT_TO_CLASS = {
    AgentType.CONVERSATIONAL_REACT_DESCRIPTION: ConversationalAgent,
    AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION: ChatAgent,
    AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION: ConversationalChatAgent,
    AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION_V2: ChatAgentV2,
}

URL_BASE = "https://raw.githubusercontent.com/hwchase17/langchain-hub/master/agents/"
@@ -34,7 +32,7 @@ URL_BASE = "https://raw.githubusercontent.com/hwchase17/langchain-hub/master/age

def _load_agent_from_tools(
    config: dict, llm: BaseLLM, tools: List[Tool], **kwargs: Any
) -> BaseSingleActionAgent:
) -> Agent:
    config_type = config.pop("_type")
    if config_type not in AGENT_TO_CLASS:
        raise ValueError(f"Loading {config_type} agent not supported")
@@ -51,7 +49,7 @@ def load_agent_from_config(
    llm: Optional[BaseLLM] = None,
    tools: Optional[List[Tool]] = None,
    **kwargs: Any,
) -> BaseSingleActionAgent:
) -> Agent:
    """Load agent from Config Dict."""
    if "_type" not in config:
        raise ValueError("Must specify an agent Type in config")
@@ -84,7 +82,7 @@ def load_agent_from_config(
    return agent_cls(**combined_config)  # type: ignore


def load_agent(path: Union[str, Path], **kwargs: Any) -> BaseSingleActionAgent:
def load_agent(path: Union[str, Path], **kwargs: Any) -> Agent:
    """Unified method for loading an agent from LangChainHub or local fs."""
    if hub_result := try_load_from_hub(
        path, _load_agent_from_file, "agents", {"json", "yaml"}
@@ -94,9 +92,7 @@ def load_agent(path: Union[str, Path], **kwargs: Any) -> BaseSingleActionAgent:
    return _load_agent_from_file(path, **kwargs)


def _load_agent_from_file(
    file: Union[str, Path], **kwargs: Any
) -> BaseSingleActionAgent:
def _load_agent_from_file(file: Union[str, Path], **kwargs: Any) -> Agent:
    """Load agent from file."""
    # Convert file to Path object.
    if isinstance(file, str):

@@ -1,28 +0,0 @@
from typing import Any, Dict, List, Tuple

from langchain.prompts.chat import ChatPromptTemplate
from langchain.schema import AgentAction


class AgentScratchPadChatPromptTemplate(ChatPromptTemplate):
    def _construct_agent_scratchpad(
        self, intermediate_steps: List[Tuple[AgentAction, str]]
    ) -> str:
        if len(intermediate_steps) == 0:
            return ""
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        return (
            f"This was your previous work "
            f"(but I haven't seen any of it! I only see what "
            f"you return as final answer):\n{thoughts}"
        )

    def _merge_partial_and_user_variables(self, **kwargs: Any) -> Dict[str, Any]:
        intermediate_steps = kwargs.pop("intermediate_steps")
        kwargs["agent_scratchpad"] = self._construct_agent_scratchpad(
            intermediate_steps
        )
        return kwargs
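
A standalone sketch of the scratchpad formatting above, using made-up intermediate steps; it reproduces the string-building loop outside the class:

```python
from langchain.schema import AgentAction

# Illustrative steps only; in practice these come from the executor.
intermediate_steps = [
    (AgentAction(tool="search", tool_input="weather", log="Action: search"), "Sunny"),
]
thoughts = ""
for action, observation in intermediate_steps:
    thoughts += action.log
    thoughts += f"\nObservation: {observation}\nThought: "
print(thoughts)  # "Action: search\nObservation: Sunny\nThought: "
```
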
@@ -1,7 +1,6 @@
"""Beta Feature: base interface for cache."""
import json
from abc import ABC, abstractmethod
from typing import Any, Callable, Dict, List, Optional, Tuple
from typing import Any, Dict, List, Optional, Tuple

from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.engine.base import Engine
@@ -138,125 +137,3 @@ class RedisCache(BaseCache):
        """Update cache based on prompt and llm_string."""
        for i, generation in enumerate(return_val):
            self.redis.set(self._key(prompt, llm_string, i), generation.text)


class GPTCache(BaseCache):
    """Cache that uses GPTCache as a backend."""

    def __init__(self, init_func: Callable[[Any], None]):
        """Initialize by passing in the `init` GPTCache func

        Args:
            init_func (Callable[[Any], None]): init `GPTCache` function

        Example:
        .. code-block:: python

            import gptcache
            from gptcache.processor.pre import get_prompt
            from gptcache.manager.factory import get_data_manager

            # Avoid multiple caches using the same file,
            # causing different llm model caches to affect each other
            i = 0
            file_prefix = "data_map"

            def init_gptcache_map(cache_obj: gptcache.Cache):
                nonlocal i
                cache_path = f'{file_prefix}_{i}.txt'
                cache_obj.init(
                    pre_embedding_func=get_prompt,
                    data_manager=get_data_manager(data_path=cache_path),
                )
                i += 1

            langchain.llm_cache = GPTCache(init_gptcache_map)

        """
        try:
            import gptcache  # noqa: F401
        except ImportError:
            raise ValueError(
                "Could not import gptcache python package. "
                "Please install it with `pip install gptcache`."
            )
        self.init_gptcache_func: Callable[[Any], None] = init_func
        self.gptcache_dict: Dict[str, Any] = {}

    @staticmethod
    def _update_cache_callback_none(*_: Any, **__: Any) -> None:
        """When updating cached data, do nothing.

        Because currently only cached queries are processed."""
        return None

    @staticmethod
    def _llm_handle_none(*_: Any, **__: Any) -> None:
        """Do nothing on a cache miss"""
        return None

    @staticmethod
    def _cache_data_converter(data: str) -> RETURN_VAL_TYPE:
        """Convert the `data` in the cache to the `RETURN_VAL_TYPE` data format."""
        return [Generation(**generation_dict) for generation_dict in json.loads(data)]

    def _get_gptcache(self, llm_string: str) -> Any:
        """Get a cache object.

        When the corresponding llm model cache does not exist, it will be created."""
        from gptcache import Cache

        _gptcache = self.gptcache_dict.get(llm_string, None)
        if _gptcache is None:
            _gptcache = Cache()
            self.init_gptcache_func(_gptcache)
            self.gptcache_dict[llm_string] = _gptcache
        return _gptcache

    def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
        """Look up the cache data.
        First, retrieve the corresponding cache object using the `llm_string` parameter,
        and then retrieve the data from the cache based on the `prompt`.
        """
        from gptcache.adapter.adapter import adapt

        _gptcache = self.gptcache_dict.get(llm_string)
        if _gptcache is None:
            return None
        res = adapt(
            GPTCache._llm_handle_none,
            GPTCache._cache_data_converter,
            GPTCache._update_cache_callback_none,
            cache_obj=_gptcache,
            prompt=prompt,
        )
        return res

    @staticmethod
    def _update_cache_callback(
        llm_data: RETURN_VAL_TYPE, update_cache_func: Callable[[Any], None]
    ) -> None:
        """Save the `llm_data` to cache storage"""
        handled_data = json.dumps([generation.dict() for generation in llm_data])
        update_cache_func(handled_data)

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        """Update cache.
        First, retrieve the corresponding cache object using the `llm_string` parameter,
        and then store the `prompt` and `return_val` in the cache object.
        """
        from gptcache.adapter.adapter import adapt

        _gptcache = self._get_gptcache(llm_string)

        def llm_handle(*_: Any, **__: Any) -> RETURN_VAL_TYPE:
            return return_val

        return adapt(
            llm_handle,
            GPTCache._cache_data_converter,
            GPTCache._update_cache_callback,
            cache_obj=_gptcache,
            cache_skip=True,
            prompt=prompt,
        )

@@ -11,7 +11,6 @@ from langchain.callbacks.base import (
    CallbackManager,
)
from langchain.callbacks.clearml_callback import ClearMLCallbackHandler
from langchain.callbacks.comet_ml_callback import CometCallbackHandler
from langchain.callbacks.openai_info import OpenAICallbackHandler
from langchain.callbacks.shared import SharedCallbackManager
from langchain.callbacks.stdout import StdOutCallbackHandler
@@ -79,7 +78,6 @@ __all__ = [
    "AimCallbackHandler",
    "WandbCallbackHandler",
    "ClearMLCallbackHandler",
    "CometCallbackHandler",
    "AsyncIteratorCallbackHandler",
    "get_openai_callback",
    "set_tracing_callback_manager",

@@ -1,627 +0,0 @@
import tempfile
from copy import deepcopy
from pathlib import Path
from typing import Any, Callable, Dict, List, Optional, Sequence, Union

import langchain
from langchain.callbacks.base import BaseCallbackHandler
from langchain.callbacks.utils import (
    BaseMetadataCallbackHandler,
    flatten_dict,
    import_pandas,
    import_spacy,
    import_textstat,
)
from langchain.schema import AgentAction, AgentFinish, Generation, LLMResult

LANGCHAIN_MODEL_NAME = "langchain-model"


def import_comet_ml() -> Any:
    try:
        import comet_ml  # noqa: F401
    except ImportError:
        raise ImportError(
            "To use the comet_ml callback manager you need to have the "
            "`comet_ml` python package installed. Please install it with"
            " `pip install comet_ml`"
        )
    return comet_ml


def _get_experiment(
    workspace: Optional[str] = None, project_name: Optional[str] = None
) -> Any:
    comet_ml = import_comet_ml()

    experiment = comet_ml.config.get_global_experiment()
    if experiment is None:
        experiment = comet_ml.Experiment(  # type: ignore
            workspace=workspace,
            project_name=project_name,
        )

    return experiment


def _fetch_text_complexity_metrics(text: str) -> dict:
    textstat = import_textstat()
    text_complexity_metrics = {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "smog_index": textstat.smog_index(text),
        "coleman_liau_index": textstat.coleman_liau_index(text),
        "automated_readability_index": textstat.automated_readability_index(text),
        "dale_chall_readability_score": textstat.dale_chall_readability_score(text),
        "difficult_words": textstat.difficult_words(text),
        "linsear_write_formula": textstat.linsear_write_formula(text),
        "gunning_fog": textstat.gunning_fog(text),
        "text_standard": textstat.text_standard(text),
        "fernandez_huerta": textstat.fernandez_huerta(text),
        "szigriszt_pazos": textstat.szigriszt_pazos(text),
        "gutierrez_polini": textstat.gutierrez_polini(text),
        "crawford": textstat.crawford(text),
        "gulpease_index": textstat.gulpease_index(text),
        "osman": textstat.osman(text),
    }
    return text_complexity_metrics


def _summarize_metrics_for_generated_outputs(metrics: Sequence) -> dict:
    pd = import_pandas()
    metrics_df = pd.DataFrame(metrics)
    metrics_summary = metrics_df.describe()

    return metrics_summary.to_dict()


class CometCallbackHandler(BaseMetadataCallbackHandler, BaseCallbackHandler):
    """Callback Handler that logs to Comet.

    Parameters:
        job_type (str): The type of comet_ml task such as "inference",
            "testing" or "qc"
        project_name (str): The comet_ml project name
        tags (list): Tags to add to the task
        task_name (str): Name of the comet_ml task
        visualize (bool): Whether to visualize the run.
        complexity_metrics (bool): Whether to log complexity metrics
        stream_logs (bool): Whether to stream callback actions to Comet

    This handler will utilize the associated callback method and formats
    the input of each callback function with metadata regarding the state of LLM run,
    and adds the response to the list of records for both the {method}_records and
    action. It then logs the response to Comet.
    """

    def __init__(
        self,
        task_type: Optional[str] = "inference",
        workspace: Optional[str] = None,
        project_name: Optional[str] = "comet-langchain-demo",
        tags: Optional[Sequence] = None,
        name: Optional[str] = None,
        visualizations: Optional[List[str]] = None,
        complexity_metrics: bool = False,
        custom_metrics: Optional[Callable] = None,
        stream_logs: bool = True,
    ) -> None:
        """Initialize callback handler."""

        comet_ml = import_comet_ml()
        super().__init__()

        self.task_type = task_type
        self.workspace = workspace
        self.project_name = project_name
        self.tags = tags
        self.visualizations = visualizations
        self.complexity_metrics = complexity_metrics
        self.custom_metrics = custom_metrics
        self.stream_logs = stream_logs
        self.temp_dir = tempfile.TemporaryDirectory()

        self.experiment = _get_experiment(workspace, project_name)
        self.experiment.log_other("Created from", "langchain")
        if tags:
            self.experiment.add_tags(tags)
        self.name = name
        if self.name:
            self.experiment.set_name(self.name)

        warning = (
            "The comet_ml callback is currently in beta and is subject to change "
            "based on updates to `langchain`. Please report any issues to "
            "https://github.com/comet_ml/issue_tracking/issues with the tag "
            "`langchain`."
        )
        comet_ml.LOGGER.warning(warning)

        self.callback_columns: list = []
        self.action_records: list = []
        self.complexity_metrics = complexity_metrics
        if self.visualizations:
            spacy = import_spacy()
            self.nlp = spacy.load("en_core_web_sm")
        else:
            self.nlp = None

    def _init_resp(self) -> Dict:
        return {k: None for k in self.callback_columns}

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when LLM starts."""
        self.step += 1
        self.llm_starts += 1
        self.starts += 1

        metadata = self._init_resp()
        metadata.update({"action": "on_llm_start"})
        metadata.update(flatten_dict(serialized))
        metadata.update(self.get_custom_callback_meta())

        for prompt in prompts:
            prompt_resp = deepcopy(metadata)
            prompt_resp["prompts"] = prompt
            self.on_llm_start_records.append(prompt_resp)
            self.action_records.append(prompt_resp)

            if self.stream_logs:
                self._log_stream(prompt, metadata, self.step)

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        """Run when LLM generates a new token."""
        self.step += 1
        self.llm_streams += 1

        resp = self._init_resp()
        resp.update({"action": "on_llm_new_token", "token": token})
        resp.update(self.get_custom_callback_meta())

        self.action_records.append(resp)

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when LLM ends running."""
        self.step += 1
        self.llm_ends += 1
        self.ends += 1

        metadata = self._init_resp()
        metadata.update({"action": "on_llm_end"})
        metadata.update(flatten_dict(response.llm_output or {}))
        metadata.update(self.get_custom_callback_meta())

        output_complexity_metrics = []
        output_custom_metrics = []

        for prompt_idx, generations in enumerate(response.generations):
            for gen_idx, generation in enumerate(generations):
                text = generation.text

                generation_resp = deepcopy(metadata)
                generation_resp.update(flatten_dict(generation.dict()))

                complexity_metrics = self._get_complexity_metrics(text)
                if complexity_metrics:
                    output_complexity_metrics.append(complexity_metrics)
                    generation_resp.update(complexity_metrics)

                custom_metrics = self._get_custom_metrics(
                    generation, prompt_idx, gen_idx
                )
                if custom_metrics:
                    output_custom_metrics.append(custom_metrics)
                    generation_resp.update(custom_metrics)

                if self.stream_logs:
                    self._log_stream(text, metadata, self.step)

                self.action_records.append(generation_resp)
                self.on_llm_end_records.append(generation_resp)

        self._log_text_metrics(output_complexity_metrics, step=self.step)
        self._log_text_metrics(output_custom_metrics, step=self.step)

    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when LLM errors."""
        self.step += 1
        self.errors += 1

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> None:
        """Run when chain starts running."""
        self.step += 1
        self.chain_starts += 1
        self.starts += 1

        resp = self._init_resp()
        resp.update({"action": "on_chain_start"})
        resp.update(flatten_dict(serialized))
        resp.update(self.get_custom_callback_meta())

        comet_ml = import_comet_ml()

        for chain_input_key, chain_input_val in inputs.items():
            if isinstance(chain_input_val, str):
                input_resp = deepcopy(resp)
                if self.stream_logs:
                    self._log_stream(chain_input_val, resp, self.step)
                input_resp.update({chain_input_key: chain_input_val})
                self.action_records.append(input_resp)

            else:
                comet_ml.LOGGER.warning(
                    f"Unexpected data format provided! "
                    f"Input Value for {chain_input_key} will not be logged"
                )

    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
        """Run when chain ends running."""
        self.step += 1
        self.chain_ends += 1
        self.ends += 1

        resp = self._init_resp()
        resp.update({"action": "on_chain_end"})
        resp.update(self.get_custom_callback_meta())

        comet_ml = import_comet_ml()

        for chain_output_key, chain_output_val in outputs.items():
            if isinstance(chain_output_val, str):
                output_resp = deepcopy(resp)
                if self.stream_logs:
                    self._log_stream(chain_output_val, resp, self.step)
                output_resp.update({chain_output_key: chain_output_val})
                self.action_records.append(output_resp)
            else:
                comet_ml.LOGGER.warning(
                    f"Unexpected data format provided! "
                    f"Output Value for {chain_output_key} will not be logged"
                )

    def on_chain_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when chain errors."""
        self.step += 1
        self.errors += 1

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> None:
        """Run when tool starts running."""
        self.step += 1
        self.tool_starts += 1
        self.starts += 1

        resp = self._init_resp()
        resp.update({"action": "on_tool_start"})
        resp.update(flatten_dict(serialized))
        resp.update(self.get_custom_callback_meta())
        if self.stream_logs:
            self._log_stream(input_str, resp, self.step)

        resp.update({"input_str": input_str})
        self.action_records.append(resp)

    def on_tool_end(self, output: str, **kwargs: Any) -> None:
        """Run when tool ends running."""
        self.step += 1
        self.tool_ends += 1
        self.ends += 1

        resp = self._init_resp()
        resp.update({"action": "on_tool_end"})
        resp.update(self.get_custom_callback_meta())
        if self.stream_logs:
            self._log_stream(output, resp, self.step)

        resp.update({"output": output})
        self.action_records.append(resp)

    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> None:
        """Run when tool errors."""
        self.step += 1
        self.errors += 1

    def on_text(self, text: str, **kwargs: Any) -> None:
        """
        Run when agent is ending.
        """
        self.step += 1
        self.text_ctr += 1

        resp = self._init_resp()
        resp.update({"action": "on_text"})
        resp.update(self.get_custom_callback_meta())
        if self.stream_logs:
            self._log_stream(text, resp, self.step)

        resp.update({"text": text})
        self.action_records.append(resp)

    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> None:
        """Run when agent ends running."""
        self.step += 1
        self.agent_ends += 1
        self.ends += 1

        resp = self._init_resp()
        output = finish.return_values["output"]
        log = finish.log

        resp.update({"action": "on_agent_finish", "log": log})
        resp.update(self.get_custom_callback_meta())
        if self.stream_logs:
            self._log_stream(output, resp, self.step)

        resp.update({"output": output})
        self.action_records.append(resp)

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run on agent action."""
        self.step += 1
        self.tool_starts += 1
        self.starts += 1

        tool = action.tool
        tool_input = action.tool_input
        log = action.log

        resp = self._init_resp()
        resp.update({"action": "on_agent_action", "log": log, "tool": tool})
        resp.update(self.get_custom_callback_meta())
        if self.stream_logs:
            self._log_stream(tool_input, resp, self.step)

        resp.update({"tool_input": tool_input})
        self.action_records.append(resp)

    def _get_complexity_metrics(self, text: str) -> dict:
        """Compute text complexity metrics using textstat.

        Parameters:
            text (str): The text to analyze.

        Returns:
            (dict): A dictionary containing the complexity metrics.
        """
        resp = {}
        if self.complexity_metrics:
            text_complexity_metrics = _fetch_text_complexity_metrics(text)
            resp.update(text_complexity_metrics)

        return resp

    def _get_custom_metrics(
        self, generation: Generation, prompt_idx: int, gen_idx: int
    ) -> dict:
        """Compute Custom Metrics for an LLM Generated Output

        Args:
            generation (LLMResult): Output generation from an LLM
            prompt_idx (int): List index of the input prompt
            gen_idx (int): List index of the generated output

        Returns:
            dict: A dictionary containing the custom metrics.
        """

        resp = {}
        if self.custom_metrics:
            custom_metrics = self.custom_metrics(generation, prompt_idx, gen_idx)
            resp.update(custom_metrics)

        return resp

    def flush_tracker(
        self,
        langchain_asset: Any = None,
        task_type: Optional[str] = "inference",
        workspace: Optional[str] = None,
        project_name: Optional[str] = "comet-langchain-demo",
        tags: Optional[Sequence] = None,
        name: Optional[str] = None,
        visualizations: Optional[List[str]] = None,
        complexity_metrics: bool = False,
        custom_metrics: Optional[Callable] = None,
        finish: bool = False,
        reset: bool = False,
    ) -> None:
        """Flush the tracker and setup the session.

        Everything after this will be a new table.

        Args:
            name: Name of the performed session so far so it is identifiable
            langchain_asset: The langchain asset to save.
            finish: Whether to finish the run.

        Returns:
            None
        """
        self._log_session(langchain_asset)

        if langchain_asset:
            self._log_model(langchain_asset)

        if finish:
            self.experiment.end()

        if reset:
            self._reset(
                task_type,
                workspace,
                project_name,
                tags,
                name,
                visualizations,
                complexity_metrics,
                custom_metrics,
            )

    def _log_stream(self, prompt: str, metadata: dict, step: int) -> None:
        self.experiment.log_text(prompt, metadata=metadata, step=step)

    def _log_model(self, langchain_asset: Any) -> None:
        comet_ml = import_comet_ml()

        model_parameters = self._get_llm_parameters(langchain_asset)
        self.experiment.log_parameters(model_parameters, prefix="model")

        langchain_asset_path = Path(self.temp_dir.name, "model.json")
        model_name = self.name if self.name else LANGCHAIN_MODEL_NAME

        try:
            if hasattr(langchain_asset, "save"):
                langchain_asset.save(langchain_asset_path)
                self.experiment.log_model(model_name, str(langchain_asset_path))
        except (ValueError, AttributeError, NotImplementedError) as e:
            if hasattr(langchain_asset, "save_agent"):
                langchain_asset.save_agent(langchain_asset_path)
                self.experiment.log_model(model_name, str(langchain_asset_path))
            else:
                comet_ml.LOGGER.warning(
                    f"{e}"
                    " Could not save Langchain Asset "
                    f"for {langchain_asset.__class__.__name__}"
                )

    def _log_session(self, langchain_asset: Optional[Any] = None) -> None:
        llm_session_df = self._create_session_analysis_dataframe(langchain_asset)
        # Log the cleaned dataframe as a table
        self.experiment.log_table("langchain-llm-session.csv", llm_session_df)

        metadata = {"langchain_version": str(langchain.__version__)}
        # Log the langchain low-level records as a JSON file directly
        self.experiment.log_asset_data(
            self.action_records, "langchain-action_records.json", metadata=metadata
        )

        self._log_visualizations(llm_session_df)

    def _log_text_metrics(self, metrics: Sequence[dict], step: int) -> None:
        if not metrics:
            return

        metrics_summary = _summarize_metrics_for_generated_outputs(metrics)
        for key, value in metrics_summary.items():
            self.experiment.log_metrics(value, prefix=key, step=step)

    def _log_visualizations(self, session_df: Any) -> None:
        if not (self.visualizations and self.nlp):
            return

        spacy = import_spacy()
        comet_ml = import_comet_ml()

        prompts = session_df["prompts"].tolist()
        outputs = session_df["text"].tolist()

        for idx, (prompt, output) in enumerate(zip(prompts, outputs)):
            doc = self.nlp(output)
            sentence_spans = list(doc.sents)

            for visualization in self.visualizations:
                try:
                    html = spacy.displacy.render(
                        sentence_spans,
                        style=visualization,
                        options={"compact": True},
                        jupyter=False,
                        page=True,
                    )
                    self.experiment.log_asset_data(
                        html,
                        name=f"langchain-viz-{visualization}-{idx}.html",
                        metadata={"prompt": prompt},
                        step=idx,
                    )
                except Exception as e:
                    comet_ml.LOGGER.warning(e)

        return

    def _reset(
        self,
        task_type: Optional[str] = None,
        workspace: Optional[str] = None,
        project_name: Optional[str] = None,
        tags: Optional[Sequence] = None,
        name: Optional[str] = None,
        visualizations: Optional[List[str]] = None,
        complexity_metrics: bool = False,
        custom_metrics: Optional[Callable] = None,
    ) -> None:
        _task_type = task_type if task_type else self.task_type
        _workspace = workspace if workspace else self.workspace
        _project_name = project_name if project_name else self.project_name
        _tags = tags if tags else self.tags
        _name = name if name else self.name
        _visualizations = visualizations if visualizations else self.visualizations
        _complexity_metrics = (
            complexity_metrics if complexity_metrics else self.complexity_metrics
        )
        _custom_metrics = custom_metrics if custom_metrics else self.custom_metrics

        self.__init__(  # type: ignore
            task_type=_task_type,
            workspace=_workspace,
            project_name=_project_name,
            tags=_tags,
            name=_name,
            visualizations=_visualizations,
            complexity_metrics=_complexity_metrics,
            custom_metrics=_custom_metrics,
        )

        self.reset_callback_meta()
        self.temp_dir = tempfile.TemporaryDirectory()

    def _create_session_analysis_dataframe(self, langchain_asset: Any = None) -> dict:
        pd = import_pandas()

        llm_parameters = self._get_llm_parameters(langchain_asset)
        num_generations_per_prompt = llm_parameters.get("n", 1)

        llm_start_records_df = pd.DataFrame(self.on_llm_start_records)
        # Repeat each input row based on the number of outputs generated per prompt
        llm_start_records_df = llm_start_records_df.loc[
            llm_start_records_df.index.repeat(num_generations_per_prompt)
        ].reset_index(drop=True)
        llm_end_records_df = pd.DataFrame(self.on_llm_end_records)

        llm_session_df = pd.merge(
            llm_start_records_df,
            llm_end_records_df,
            left_index=True,
            right_index=True,
            suffixes=["_llm_start", "_llm_end"],
        )

        return llm_session_df

    def _get_llm_parameters(self, langchain_asset: Any = None) -> dict:
        if not langchain_asset:
            return {}
        try:
            if hasattr(langchain_asset, "agent"):
                llm_parameters = langchain_asset.agent.llm_chain.llm.dict()
            elif hasattr(langchain_asset, "llm_chain"):
                llm_parameters = langchain_asset.llm_chain.llm.dict()
            elif hasattr(langchain_asset, "llm"):
                llm_parameters = langchain_asset.llm.dict()
            else:
                llm_parameters = langchain_asset.dict()
        except Exception:
            return {}

        return llm_parameters
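
A minimal usage sketch for the handler above; it assumes `comet_ml` is installed, a Comet API key is configured, and the callback-manager wiring of the v0.0.139-era API:

```python
# Hypothetical sketch: log a single LLM call to Comet, then flush.
from langchain.callbacks import CometCallbackHandler
from langchain.callbacks.base import CallbackManager
from langchain.llms import OpenAI

comet_callback = CometCallbackHandler(project_name="comet-langchain-demo")
llm = OpenAI(temperature=0.9, callback_manager=CallbackManager([comet_callback]))
llm("Tell me a joke")
comet_callback.flush_tracker(llm, finish=True)
```
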
@@ -47,19 +47,14 @@ class AsyncIteratorCallbackHandler(AsyncCallbackHandler):
        while not self.queue.empty() or not self.done.is_set():
            # Wait for the next token in the queue,
            # but stop waiting if the done event is set
            done, other = await asyncio.wait(
            done, _ = await asyncio.wait(
                [
                    # NOTE: If you add other tasks here, update the code below,
                    # which assumes each set has exactly one task each
                    asyncio.ensure_future(self.queue.get()),
                    asyncio.ensure_future(self.done.wait()),
                ],
                return_when=asyncio.FIRST_COMPLETED,
            )

            # Cancel the other task
            other.pop().cancel()

            # Extract the value of the first completed task
            token_or_done = cast(Union[str, Literal[True]], done.pop().result())


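A hedged sketch of how this handler is consumed; `aiter()` yields tokens as the LLM streams them, and attaching the handler to a streaming LLM is omitted here:

```python
import asyncio

from langchain.callbacks import AsyncIteratorCallbackHandler


async def consume(handler: AsyncIteratorCallbackHandler) -> None:
    # Print tokens as they arrive; the loop ends when the handler is done.
    async for token in handler.aiter():
        print(token, end="")
```
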
@@ -28,7 +28,7 @@ class OpenAPIEndpointChain(Chain, BaseModel):
    """Chain interacts with an OpenAPI endpoint using natural language."""

    api_request_chain: LLMChain
    api_response_chain: Optional[LLMChain]
    api_response_chain: LLMChain
    api_operation: APIOperation
    requests: Requests = Field(exclude=True, default_factory=Requests)
    param_mapping: _ParamMapping = Field(alias="param_mapping")
@@ -144,18 +144,15 @@ class OpenAPIEndpointChain(Chain, BaseModel):
        self.callback_manager.on_text(
            response_text, color="blue", end="\n", verbose=self.verbose
        )
        if self.api_response_chain is not None:
            _answer = self.api_response_chain.predict_and_parse(
                response=response_text,
                instructions=instructions,
            )
            answer = cast(str, _answer)
            self.callback_manager.on_text(
                answer, color="yellow", end="\n", verbose=self.verbose
            )
            return self._get_output(answer, intermediate_steps)
        else:
            return self._get_output(response_text, intermediate_steps)
        _answer = self.api_response_chain.predict_and_parse(
            response=response_text,
            instructions=instructions,
        )
        answer = cast(str, _answer)
        self.callback_manager.on_text(
            answer, color="yellow", end="\n", verbose=self.verbose
        )
        return self._get_output(answer, intermediate_steps)

    @classmethod
    def from_url_and_method(
@@ -187,7 +184,6 @@ class OpenAPIEndpointChain(Chain, BaseModel):
        requests: Optional[Requests] = None,
        verbose: bool = False,
        return_intermediate_steps: bool = False,
        raw_response: bool = False,
        **kwargs: Any
        # TODO: Handle async
    ) -> "OpenAPIEndpointChain":
@@ -200,10 +196,7 @@ class OpenAPIEndpointChain(Chain, BaseModel):
        requests_chain = APIRequesterChain.from_llm_and_typescript(
            llm, typescript_definition=operation.to_typescript(), verbose=verbose
        )
        if raw_response:
            response_chain = None
        else:
            response_chain = APIResponderChain.from_llm(llm, verbose=verbose)
        response_chain = APIResponderChain.from_llm(llm, verbose=verbose)
        _requests = requests or Requests()
        return cls(
            api_request_chain=requests_chain,

@@ -57,7 +57,7 @@ class LLMRequestsChain(Chain):
        except ImportError:
            raise ValueError(
                "Could not import bs4 python package. "
                "Please install it with `pip install bs4`."
                "Please it install it with `pip install bs4`."
            )
        return values


@@ -55,7 +55,7 @@ class OpenAIModerationChain(Chain):
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
                "Please it install it with `pip install openai`."
            )
        return values


@@ -53,7 +53,7 @@ class Crawler:
        except ImportError:
            raise ValueError(
                "Could not import playwright python package. "
                "Please install it with `pip install playwright`."
                "Please it install it with `pip install playwright`."
            )
        self.browser: Browser = (
            sync_playwright().start().chromium.launch(headless=False)

@@ -87,7 +87,7 @@ class AzureChatOpenAI(ChatOpenAI):
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
                "Please it install it with `pip install openai`."
            )
        try:
            values["client"] = openai.ChatCompletion

@@ -167,7 +167,7 @@ class ChatOpenAI(BaseChatModel):
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
                "Please it install it with `pip install openai`."
            )
        try:
            values["client"] = openai.ChatCompletion
@@ -336,7 +336,7 @@ class ChatOpenAI(BaseChatModel):
            raise ValueError(
                "Could not import tiktoken python package. "
                "This is needed in order to calculate get_num_tokens. "
                "Please install it with `pip install tiktoken`."
                "Please it install it with `pip install tiktoken`."
            )
        # create a GPT-3.5-Turbo encoder instance
        enc = tiktoken.encoding_for_model(self.model_name)
@@ -358,7 +358,7 @@ class ChatOpenAI(BaseChatModel):
            raise ValueError(
                "Could not import tiktoken python package. "
                "This is needed in order to calculate get_num_tokens. "
                "Please install it with `pip install tiktoken`."
                "Please it install it with `pip install tiktoken`."
            )

        model = self.model_name

@@ -10,7 +10,6 @@ from langchain.document_loaders.azure_blob_storage_file import (
    AzureBlobStorageFileLoader,
)
from langchain.document_loaders.bigquery import BigQueryLoader
from langchain.document_loaders.bilibili import BiliBiliLoader
from langchain.document_loaders.blackboard import BlackboardLoader
from langchain.document_loaders.college_confidential import CollegeConfidentialLoader
from langchain.document_loaders.conllu import CoNLLULoader
@@ -137,5 +136,4 @@ __all__ = [
    "SitemapLoader",
    "DuckDBLoader",
    "BigQueryLoader",
    "BiliBiliLoader",
]

@@ -24,7 +24,7 @@ class AzureBlobStorageContainerLoader(BaseLoader):
        except ImportError as exc:
            raise ValueError(
                "Could not import azure storage blob python package. "
                "Please install it with `pip install azure-storage-blob`."
                "Please it install it with `pip install azure-storage-blob`."
            ) from exc

        container = ContainerClient.from_connection_string(

@@ -24,7 +24,7 @@ class AzureBlobStorageFileLoader(BaseLoader):
        except ImportError as exc:
            raise ValueError(
                "Could not import azure storage blob python package. "
                "Please install it with `pip install azure-storage-blob`."
                "Please it install it with `pip install azure-storage-blob`."
            ) from exc

        client = BlobClient.from_connection_string(

@@ -35,7 +35,7 @@ class DuckDBLoader(BaseLoader):
        except ImportError:
            raise ValueError(
                "Could not import duckdb python package. "
                "Please install it with `pip install duckdb`."
                "Please it install it with `pip install duckdb`."
            )

        docs = []

@@ -22,7 +22,7 @@ class GCSDirectoryLoader(BaseLoader):
        except ImportError:
            raise ValueError(
                "Could not import google-cloud-storage python package. "
                "Please install it with `pip install google-cloud-storage`."
                "Please it install it with `pip install google-cloud-storage`."
            )
        client = storage.Client(project=self.project_name)
        docs = []

@@ -23,7 +23,7 @@ class GCSFileLoader(BaseLoader):
        except ImportError:
            raise ValueError(
                "Could not import google-cloud-storage python package. "
                "Please install it with `pip install google-cloud-storage`."
                "Please it install it with `pip install google-cloud-storage`."
            )

        # Initialise a client

@@ -21,7 +21,7 @@ class S3DirectoryLoader(BaseLoader):
        except ImportError:
            raise ValueError(
                "Could not import boto3 python package. "
                "Please install it with `pip install boto3`."
                "Please it install it with `pip install boto3`."
            )
        s3 = boto3.resource("s3")
        bucket = s3.Bucket(self.bucket)

@@ -23,7 +23,7 @@ class S3FileLoader(BaseLoader):
        except ImportError:
            raise ValueError(
                "Could not import boto3 python package. "
                "Please install it with `pip install boto3`."
                "Please it install it with `pip install boto3`."
            )
        s3 = boto3.client("s3")
        with tempfile.TemporaryDirectory() as temp_dir:

@@ -47,32 +47,22 @@ class UnstructuredURLLoader(BaseLoader):

        return unstructured_version >= (0, 5, 7)

    def __is_non_html_available(self) -> bool:
        _unstructured_version = self.__version.split("-")[0]
        unstructured_version = tuple([int(x) for x in _unstructured_version.split(".")])

        return unstructured_version >= (0, 5, 12)

    def load(self) -> List[Document]:
        """Load file."""
        from unstructured.partition.auto import partition
        from unstructured.partition.html import partition_html

        docs: List[Document] = list()
        for url in self.urls:
            try:
                if self.headers and self.__is_headers_available():
                if self.__is_headers_available():
                    elements = partition_html(
                        url=url, headers=self.headers, **self.unstructured_kwargs
                    )
                elif self.__is_non_html_available():
                    elements = partition(url=url, **self.unstructured_kwargs)
                else:
                    elements = partition_html(url=url, **self.unstructured_kwargs)
            except Exception as e:
                if self.continue_on_failure:
                    logger.error(f"Error fetching or processing {url}, exception: {e}")
                    continue
                else:
                    raise e
            text = "\n\n".join([str(el) for el in elements])

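A minimal usage sketch for the loader above; the URL is a placeholder and the `unstructured` package must be installed:

```python
# Hypothetical sketch: partition one web page into Documents.
from langchain.document_loaders import UnstructuredURLLoader

loader = UnstructuredURLLoader(urls=["https://example.com/article.html"])
docs = loader.load()
```
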
@@ -106,8 +106,8 @@ class YoutubeLoader(BaseLoader):
        self.language = language

    @classmethod
    def from_youtube_url(cls, youtube_url: str, **kwargs: Any) -> YoutubeLoader:
        """Given youtube URL, load video."""
    def from_youtube_channel(cls, youtube_url: str, **kwargs: Any) -> YoutubeLoader:
        """Given a channel name, load all videos."""
        video_id = youtube_url.split("youtube.com/watch?v=")[-1]
        return cls(video_id, **kwargs)

@@ -118,7 +118,7 @@ class YoutubeLoader(BaseLoader):
        except ImportError:
            raise ImportError(
                "Could not import youtube_transcript_api python package. "
                "Please install it with `pip install youtube-transcript-api`."
                "Please it install it with `pip install youtube-transcript-api`."
            )

        metadata = {"source": self.video_id}
@@ -159,7 +159,7 @@ class YoutubeLoader(BaseLoader):
        except ImportError:
            raise ImportError(
                "Could not import pytube python package. "
                "Please install it with `pip install pytube`."
                "Please it install it with `pip install pytube`."
            )
        yt = YouTube(f"https://www.youtube.com/watch?v={self.video_id}")
        video_info = {

||||
@@ -58,7 +58,7 @@ class AlephAlphaAsymmetricSemanticEmbedding(BaseModel, Embeddings):
        except ImportError:
            raise ValueError(
                "Could not import aleph_alpha_client python package. "
-                "Please install it with `pip install aleph_alpha_client`."
+                "Please it install it with `pip install aleph_alpha_client`."
            )
        values["client"] = Client(token=aleph_alpha_api_key)
        return values

@@ -81,7 +81,7 @@ class AlephAlphaAsymmetricSemanticEmbedding(BaseModel, Embeddings):
        except ImportError:
            raise ValueError(
                "Could not import aleph_alpha_client python package. "
-                "Please install it with `pip install aleph_alpha_client`."
+                "Please it install it with `pip install aleph_alpha_client`."
            )
        document_embeddings = []

@@ -121,7 +121,7 @@ class AlephAlphaAsymmetricSemanticEmbedding(BaseModel, Embeddings):
        except ImportError:
            raise ValueError(
                "Could not import aleph_alpha_client python package. "
-                "Please install it with `pip install aleph_alpha_client`."
+                "Please it install it with `pip install aleph_alpha_client`."
            )
        symmetric_params = {
            "prompt": Prompt.from_text(text),

@@ -166,7 +166,7 @@ class AlephAlphaSymmetricSemanticEmbedding(AlephAlphaAsymmetricSemanticEmbedding
        except ImportError:
            raise ValueError(
                "Could not import aleph_alpha_client python package. "
-                "Please install it with `pip install aleph_alpha_client`."
+                "Please it install it with `pip install aleph_alpha_client`."
            )
        query_params = {
            "prompt": Prompt.from_text(text),

@@ -48,7 +48,7 @@ class CohereEmbeddings(BaseModel, Embeddings):
        except ImportError:
            raise ValueError(
                "Could not import cohere python package. "
-                "Please install it with `pip install cohere`."
+                "Please it install it with `pip install cohere`."
            )
        return values

@@ -73,7 +73,7 @@ class HuggingFaceHubEmbeddings(BaseModel, Embeddings):
        except ImportError:
            raise ValueError(
                "Could not import huggingface_hub python package. "
-                "Please install it with `pip install huggingface_hub`."
+                "Please it install it with `pip install huggingface_hub`."
            )
        return values

@@ -36,7 +36,7 @@ class JinaEmbeddings(BaseModel, Embeddings):
        except ImportError:
            raise ValueError(
                "Could not import `jina` python package. "
-                "Please install it with `pip install jina`."
+                "Please it install it with `pip install jina`."
            )

        # Setup client
@@ -188,7 +188,7 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
-                "Please install it with `pip install openai`."
+                "Please it install it with `pip install openai`."
            )
        return values

@@ -242,7 +242,7 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
            raise ValueError(
                "Could not import tiktoken python package. "
                "This is needed in order to for OpenAIEmbeddings. "
-                "Please install it with `pip install tiktoken`."
+                "Please it install it with `pip install tiktoken`."
            )

    def _embedding_func(self, text: str, *, engine: str) -> List[float]:

@@ -131,7 +131,7 @@ class SagemakerEndpointEmbeddings(BaseModel, Embeddings):
        except ImportError:
            raise ValueError(
                "Could not import boto3 python package. "
-                "Please install it with `pip install boto3`."
+                "Please it install it with `pip install boto3`."
            )
        return values
@@ -2,7 +2,7 @@
from langchain.prompts import PromptTemplate

template = """You are a teacher grading a quiz.
-You are given a question, the student's answer, and the true answer, and are asked to score the student answer as either CORRECT or INCORRECT.
+You are given a question, the student's answer, and the true answer, and are asked to score it as either CORRECT or INCORRECT.

Example Format:
QUESTION: question here
@@ -10,7 +10,7 @@ STUDENT ANSWER: student's answer here
TRUE ANSWER: true answer here
GRADE: CORRECT or INCORRECT here

-Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin!
+Please remember to grade them based on being factually accurate. Begin!

QUESTION: {query}
STUDENT ANSWER: {result}
@@ -21,7 +21,7 @@ PROMPT = PromptTemplate(
)

context_template = """You are a teacher grading a quiz.
-You are given a question, the context the question is about, and the student's answer. You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.
+You are given a question, the contex the question is about, and the student's answer You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.

Example Format:
QUESTION: question here
@@ -29,7 +29,7 @@ CONTEXT: context the question is about here
STUDENT ANSWER: student's answer here
GRADE: CORRECT or INCORRECT here

-Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin!
+Please remember to grade them based on being factually accurate. Begin!

QUESTION: {query}
CONTEXT: {context}
@@ -41,7 +41,7 @@ CONTEXT_PROMPT = PromptTemplate(


cot_template = """You are a teacher grading a quiz.
-You are given a question, the context the question is about, and the student's answer. You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.
+You are given a question, the contex the question is about, and the student's answer You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.
Write out in a step by step manner your reasoning to be sure that your conclusion is correct. Avoid simply stating the correct answer at the outset.

Example Format:
@@ -51,7 +51,7 @@ STUDENT ANSWER: student's answer here
EXPLANATION: step by step reasoning here
GRADE: CORRECT or INCORRECT here

-Grade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin!
+Please remember to grade them based on being factually accurate. Begin!

QUESTION: {query}
CONTEXT: {context}
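For readers unfamiliar with how these grading templates get used: each one is wrapped in a `PromptTemplate` and formatted per QA example. A minimal sketch, using the `{query}` and `{result}` slots shown above plus an illustrative `{answer}` slot for the true answer (the toy values are made up):

```python
from langchain.prompts import PromptTemplate

template = """You are a teacher grading a quiz.
QUESTION: {query}
STUDENT ANSWER: {result}
TRUE ANSWER: {answer}
GRADE:"""

prompt = PromptTemplate(input_variables=["query", "result", "answer"],
                        template=template)
# the formatted string is what gets sent to the grading LLM
print(prompt.format(query="What is 2+2?", result="4", answer="4"))
```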
@@ -56,7 +56,7 @@ class NetworkxEntityGraph:
        except ImportError:
            raise ValueError(
                "Could not import networkx python package. "
-                "Please install it with `pip install networkx`."
+                "Please it install it with `pip install networkx`."
            )
        if graph is not None:
            if not isinstance(graph, nx.DiGraph):
@@ -72,7 +72,7 @@ class NetworkxEntityGraph:
        except ImportError:
            raise ValueError(
                "Could not import networkx python package. "
-                "Please install it with `pip install networkx`."
+                "Please it install it with `pip install networkx`."
            )
        graph = nx.read_gml(gml_path)
        return cls(graph)
@@ -149,7 +149,7 @@ class AlephAlpha(LLM):
        except ImportError:
            raise ValueError(
                "Could not import aleph_alpha_client python package. "
-                "Please install it with `pip install aleph_alpha_client`."
+                "Please it install it with `pip install aleph_alpha_client`."
            )
        return values

@@ -76,7 +76,7 @@ class Anthropic(LLM):
        except ImportError:
            raise ValueError(
                "Could not import anthropic python package. "
-                "Please install it with `pip install anthropic`."
+                "Please it install it with `pip install anthropic`."
            )
        return values

@@ -73,7 +73,7 @@ class Cohere(LLM):
        except ImportError:
            raise ValueError(
                "Could not import cohere python package. "
-                "Please install it with `pip install cohere`."
+                "Please it install it with `pip install cohere`."
            )
        return values

@@ -70,7 +70,7 @@ class HuggingFaceEndpoint(LLM):
        except ImportError:
            raise ValueError(
                "Could not import huggingface_hub python package. "
-                "Please install it with `pip install huggingface_hub`."
+                "Please it install it with `pip install huggingface_hub`."
            )
        return values

@@ -66,7 +66,7 @@ class HuggingFaceHub(LLM):
        except ImportError:
            raise ValueError(
                "Could not import huggingface_hub python package. "
-                "Please install it with `pip install huggingface_hub`."
+                "Please it install it with `pip install huggingface_hub`."
            )
        return values

@@ -28,7 +28,7 @@ class ManifestWrapper(LLM):
        except ImportError:
            raise ValueError(
                "Could not import manifest python package. "
-                "Please install it with `pip install manifest-ml`."
+                "Please it install it with `pip install manifest-ml`."
            )
        return values

@@ -76,7 +76,7 @@ class NLPCloud(LLM):
        except ImportError:
            raise ValueError(
                "Could not import nlpcloud python package. "
-                "Please install it with `pip install nlpcloud`."
+                "Please it install it with `pip install nlpcloud`."
            )
        return values

@@ -446,7 +446,7 @@ class BaseOpenAI(BaseLLM):
            raise ValueError(
                "Could not import tiktoken python package. "
                "This is needed in order to calculate get_num_tokens. "
-                "Please install it with `pip install tiktoken`."
+                "Please it install it with `pip install tiktoken`."
            )
        encoder = "gpt2"
        if self.model_name in ("text-davinci-003", "text-davinci-002"):
@@ -611,7 +611,7 @@ class OpenAIChat(BaseLLM):
        except ImportError:
            raise ValueError(
                "Could not import openai python package. "
-                "Please install it with `pip install openai`."
+                "Please it install it with `pip install openai`."
            )
        try:
            values["client"] = openai.ChatCompletion
@@ -742,7 +742,7 @@ class OpenAIChat(BaseLLM):
            raise ValueError(
                "Could not import tiktoken python package. "
                "This is needed in order to calculate get_num_tokens. "
-                "Please install it with `pip install tiktoken`."
+                "Please it install it with `pip install tiktoken`."
            )
        # create a GPT-3.5-Turbo encoder instance
        enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

@@ -176,7 +176,7 @@ class SagemakerEndpoint(LLM):
        except ImportError:
            raise ValueError(
                "Could not import boto3 python package. "
-                "Please install it with `pip install boto3`."
+                "Please it install it with `pip install boto3`."
            )
        return values
@@ -28,10 +28,11 @@ class ConversationBufferWindowMemory(BaseChatMemory):
    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        """Return history buffer."""

-        buffer: Any = self.buffer[-self.k * 2 :] if self.k > 0 else []
-        if not self.return_messages:
+        if self.return_messages:
+            buffer: Any = self.buffer[-self.k * 2 :]
+        else:
            buffer = get_buffer_string(
-                buffer,
+                self.buffer[-self.k * 2 :],
                human_prefix=self.human_prefix,
                ai_prefix=self.ai_prefix,
            )
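The window math above is easy to misread, so here is a minimal standalone sketch of the same idea: keep only the last `k` conversational turns, where each turn is a human message plus an AI message (hence the `-k * 2` slice). The `Message` type is an illustrative stand-in, not a LangChain class:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Message:  # illustrative stand-in for a chat message
    role: str
    content: str

def last_k_turns(buffer: List[Message], k: int) -> List[Message]:
    """Keep the last k human/AI exchange pairs, i.e. 2*k messages."""
    return buffer[-k * 2:] if k > 0 else []

history = [Message("human", "hi"), Message("ai", "hello"),
           Message("human", "how are you?"), Message("ai", "fine!")]
assert len(last_k_turns(history, 1)) == 2  # only the newest exchange survives
```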
@@ -1,32 +1,30 @@
from __future__ import annotations

-from typing import TypeVar
+from typing import Any

from langchain.chains.llm import LLMChain
from langchain.output_parsers.prompts import NAIVE_FIX_PROMPT
from langchain.prompts.base import BasePromptTemplate
from langchain.schema import BaseLanguageModel, BaseOutputParser, OutputParserException

-T = TypeVar("T")
-

-class OutputFixingParser(BaseOutputParser[T]):
+class OutputFixingParser(BaseOutputParser):
    """Wraps a parser and tries to fix parsing errors."""

-    parser: BaseOutputParser[T]
+    parser: BaseOutputParser
    retry_chain: LLMChain

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
-        parser: BaseOutputParser[T],
+        parser: BaseOutputParser,
        prompt: BasePromptTemplate = NAIVE_FIX_PROMPT,
-    ) -> OutputFixingParser[T]:
+    ) -> OutputFixingParser:
        chain = LLMChain(llm=llm, prompt=prompt)
        return cls(parser=parser, retry_chain=chain)

-    def parse(self, completion: str) -> T:
+    def parse(self, completion: str) -> Any:
        try:
            parsed_completion = self.parser.parse(completion)
        except OutputParserException as e:
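For context, the class above implements a simple repair-on-failure pattern: try the wrapped parser, and on error ask an LLM to fix the completion before parsing again. A dependency-free sketch of that control flow (the `fix` callable stands in for the LLM retry chain and is purely illustrative):

```python
import json
from typing import Callable

def parse_with_fixing(completion: str,
                      parse: Callable[[str], dict],
                      fix: Callable[[str], str]) -> dict:
    """Try to parse; on failure, hand the bad completion to a repair function."""
    try:
        return parse(completion)
    except ValueError:
        # in LangChain the "fixer" is an LLM chain; here it is a toy callable
        return parse(fix(completion))

result = parse_with_fixing("{'a': 1}",                        # invalid JSON (single quotes)
                           parse=json.loads,
                           fix=lambda s: s.replace("'", '"'))  # toy "LLM" repair
assert result == {"a": 1}
```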
@@ -1,19 +1,17 @@
import json
import re
-from typing import Type, TypeVar
+from typing import Any

from pydantic import BaseModel, ValidationError

from langchain.output_parsers.format_instructions import PYDANTIC_FORMAT_INSTRUCTIONS
from langchain.schema import BaseOutputParser, OutputParserException

-T = TypeVar("T", bound=BaseModel)

+class PydanticOutputParser(BaseOutputParser):
+    pydantic_object: Any

-class PydanticOutputParser(BaseOutputParser[T]):
-    pydantic_object: Type[T]

-    def parse(self, text: str) -> T:
+    def parse(self, text: str) -> BaseModel:
        try:
            # Greedy search for 1st json candidate.
            match = re.search(
@@ -40,6 +38,6 @@ class PydanticOutputParser(BaseOutputParser[T]):
        if "type" in reduced_schema:
            del reduced_schema["type"]
        # Ensure json in context is well-formed with double quotes.
-        schema_str = json.dumps(reduced_schema)
+        schema = json.dumps(reduced_schema)

-        return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema_str)
+        return PYDANTIC_FORMAT_INSTRUCTIONS.format(schema=schema)
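The generics being removed in this hunk are what let a type checker infer the concrete model from the parser instance. A minimal sketch of that pattern on its own (no LangChain imports; `Animal` is a made-up example model, and the code assumes pydantic v1, which this version of langchain uses):

```python
from typing import Generic, Type, TypeVar
from pydantic import BaseModel

T = TypeVar("T", bound=BaseModel)

class TypedParser(Generic[T]):
    def __init__(self, model: Type[T]) -> None:
        self.model = model

    def parse(self, raw_json: str) -> T:
        # a type checker knows the return type is the concrete model here
        return self.model.parse_raw(raw_json)

class Animal(BaseModel):
    name: str
    legs: int

dog = TypedParser(Animal).parse('{"name": "Rex", "legs": 4}')
assert dog.legs == 4  # `dog` is typed as Animal, not BaseModel
```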
@@ -1,6 +1,6 @@
from __future__ import annotations

-from typing import TypeVar
+from typing import Any

from langchain.chains.llm import LLMChain
from langchain.prompts.base import BasePromptTemplate
@@ -34,30 +34,28 @@ NAIVE_RETRY_WITH_ERROR_PROMPT = PromptTemplate.from_template(
    NAIVE_COMPLETION_RETRY_WITH_ERROR
)

-T = TypeVar("T")
-

-class RetryOutputParser(BaseOutputParser[T]):
+class RetryOutputParser(BaseOutputParser):
    """Wraps a parser and tries to fix parsing errors.

    Does this by passing the original prompt and the completion to another
    LLM, and telling it the completion did not satisfy criteria in the prompt.
    """

-    parser: BaseOutputParser[T]
+    parser: BaseOutputParser
    retry_chain: LLMChain

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
-        parser: BaseOutputParser[T],
+        parser: BaseOutputParser,
        prompt: BasePromptTemplate = NAIVE_RETRY_PROMPT,
-    ) -> RetryOutputParser[T]:
+    ) -> RetryOutputParser:
        chain = LLMChain(llm=llm, prompt=prompt)
        return cls(parser=parser, retry_chain=chain)

-    def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T:
+    def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> Any:
        try:
            parsed_completion = self.parser.parse(completion)
        except OutputParserException:
@@ -68,7 +66,7 @@ class RetryOutputParser(BaseOutputParser[T]):

        return parsed_completion

-    def parse(self, completion: str) -> T:
+    def parse(self, completion: str) -> Any:
        raise NotImplementedError(
            "This OutputParser can only be called by the `parse_with_prompt` method."
        )
@@ -77,7 +75,7 @@ class RetryOutputParser(BaseOutputParser[T]):
        return self.parser.get_format_instructions()


-class RetryWithErrorOutputParser(BaseOutputParser[T]):
+class RetryWithErrorOutputParser(BaseOutputParser):
    """Wraps a parser and tries to fix parsing errors.

    Does this by passing the original prompt, the completion, AND the error
@@ -87,20 +85,20 @@ class RetryWithErrorOutputParser(BaseOutputParser[T]):
    LLM, which in theory should give it more information on how to fix it.
    """

-    parser: BaseOutputParser[T]
+    parser: BaseOutputParser
    retry_chain: LLMChain

    @classmethod
    def from_llm(
        cls,
        llm: BaseLanguageModel,
-        parser: BaseOutputParser[T],
+        parser: BaseOutputParser,
        prompt: BasePromptTemplate = NAIVE_RETRY_WITH_ERROR_PROMPT,
-    ) -> RetryWithErrorOutputParser[T]:
+    ) -> RetryWithErrorOutputParser:
        chain = LLMChain(llm=llm, prompt=prompt)
        return cls(parser=parser, retry_chain=chain)

-    def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T:
+    def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> Any:
        try:
            parsed_completion = self.parser.parse(completion)
        except OutputParserException as e:
@@ -111,7 +109,7 @@ class RetryWithErrorOutputParser(BaseOutputParser[T]):

        return parsed_completion

-    def parse(self, completion: str) -> T:
+    def parse(self, completion: str) -> Any:
        raise NotImplementedError(
            "This OutputParser can only be called by the `parse_with_prompt` method."
        )
@@ -1,7 +1,7 @@
from __future__ import annotations

import json
-from typing import Any, List
+from typing import List

from pydantic import BaseModel

@@ -37,7 +37,7 @@ class StructuredOutputParser(BaseOutputParser):
        )
        return STRUCTURED_FORMAT_INSTRUCTIONS.format(format=schema_str)

-    def parse(self, text: str) -> Any:
+    def parse(self, text: str) -> BaseModel:
        json_string = text.split("```json")[1].strip().strip("```").strip()
        try:
            json_obj = json.loads(json_string)
@@ -1,23 +1,32 @@
-"""Taken from: https://docs.pinecone.io/docs/hybrid-search"""
-import hashlib
-from typing import Any, Dict, List, Optional
+"""Taken from: https://www.pinecone.io/learn/hybrid-search-intro/"""
+from collections import Counter
+from typing import Any, Dict, List, Tuple

-from pydantic import BaseModel, Extra, root_validator
+from pydantic import BaseModel, Extra

from langchain.embeddings.base import Embeddings
from langchain.schema import BaseRetriever, Document


-def hash_text(text: str) -> str:
-    return str(hashlib.sha256(text.encode("utf-8")).hexdigest())
+def build_dict(input_batch: List[List[int]]) -> List[Dict]:
+    # store a batch of sparse embeddings
+    sparse_emb = []
+    # iterate through input batch
+    for token_ids in input_batch:
+        indices = []
+        values = []
+        # convert the input_ids list to a dictionary of key to frequency values
+        d = dict(Counter(token_ids))
+        for idx in d:
+            indices.append(idx)
+            values.append(d[idx])
+        sparse_emb.append({"indices": indices, "values": values})
+    # return sparse_emb list
+    return sparse_emb


def create_index(
-    contexts: List[str],
-    index: Any,
-    embeddings: Embeddings,
-    sparse_encoder: Any,
-    ids: Optional[List[str]] = None,
+    contexts: List[str], index: Any, embeddings: Embeddings, tokenizer: Any
) -> None:
    batch_size = 32
    _iterator = range(0, len(contexts), batch_size)
@@ -28,33 +37,28 @@ def create_index(
    except ImportError:
        pass

-    if ids is None:
-        # create unique ids using hash of the text
-        ids = [hash_text(context) for context in contexts]

    for i in _iterator:
        # find end of batch
        i_end = min(i + batch_size, len(contexts))
        # extract batch
        context_batch = contexts[i:i_end]
-        batch_ids = ids[i:i_end]
+        # create unique IDs
+        ids = [str(x) for x in range(i, i_end)]
        # add context passages as metadata
        meta = [{"context": context} for context in context_batch]
        # create dense vectors
        dense_embeds = embeddings.embed_documents(context_batch)
        # create sparse vectors
-        sparse_embeds = sparse_encoder.encode_documents(context_batch)
+        sparse_embeds = generate_sparse_vectors(context_batch, tokenizer)
        for s in sparse_embeds:
            s["values"] = [float(s1) for s1 in s["values"]]

        vectors = []
        # loop through the data and create dictionaries for upserts
-        for doc_id, sparse, dense, metadata in zip(
-            batch_ids, sparse_embeds, dense_embeds, meta
-        ):
+        for _id, sparse, dense, metadata in zip(ids, sparse_embeds, dense_embeds, meta):
            vectors.append(
                {
-                    "id": doc_id,
+                    "id": _id,
                    "sparse_values": sparse,
                    "values": dense,
                    "metadata": metadata,
@@ -65,10 +69,38 @@ def create_index(
    index.upsert(vectors)


+def generate_sparse_vectors(context_batch: List[str], tokenizer: Any) -> List[Dict]:
+    # create batch of input_ids
+    inputs = tokenizer(
+        context_batch,
+        padding=True,
+        truncation=True,
+        max_length=512,  # special_tokens=False
+    )["input_ids"]
+    # create sparse dictionaries
+    sparse_embeds = build_dict(inputs)
+    return sparse_embeds
+
+
+def hybrid_scale(
+    dense: List[float], sparse: Dict, alpha: float
+) -> Tuple[List[float], Dict]:
+    # check alpha value is in range
+    if alpha < 0 or alpha > 1:
+        raise ValueError("Alpha must be between 0 and 1")
+    # scale sparse and dense vectors to create hybrid search vecs
+    hsparse = {
+        "indices": sparse["indices"],
+        "values": [v * (1 - alpha) for v in sparse["values"]],
+    }
+    hdense = [v * alpha for v in dense]
+    return hdense, hsparse
+
+
class PineconeHybridSearchRetriever(BaseRetriever, BaseModel):
    embeddings: Embeddings
-    sparse_encoder: Any
    index: Any
+    tokenizer: Any
    top_k: int = 4
    alpha: float = 0.5
@@ -78,32 +110,15 @@ class PineconeHybridSearchRetriever(BaseRetriever, BaseModel):
        extra = Extra.forbid
        arbitrary_types_allowed = True

-    def add_texts(self, texts: List[str], ids: Optional[List[str]] = None) -> None:
-        create_index(texts, self.index, self.embeddings, self.sparse_encoder, ids=ids)

-    @root_validator()
-    def validate_environment(cls, values: Dict) -> Dict:
-        """Validate that api key and python package exists in environment."""
-        try:
-            from pinecone_text.hybrid import hybrid_convex_scale  # noqa:F401
-            from pinecone_text.sparse.base_sparse_encoder import (
-                BaseSparseEncoder,  # noqa:F401
-            )
-        except ImportError:
-            raise ValueError(
-                "Could not import pinecone_text python package. "
-                "Please install it with `pip install pinecone_text`."
-            )
-        return values
+    def add_texts(self, texts: List[str]) -> None:
+        create_index(texts, self.index, self.embeddings, self.tokenizer)

    def get_relevant_documents(self, query: str) -> List[Document]:
-        from pinecone_text.hybrid import hybrid_convex_scale

-        sparse_vec = self.sparse_encoder.encode_queries(query)
+        sparse_vec = generate_sparse_vectors([query], self.tokenizer)[0]
        # convert the question into a dense vector
        dense_vec = self.embeddings.embed_query(query)
        # scale alpha with hybrid_scale
-        dense_vec, sparse_vec = hybrid_convex_scale(dense_vec, sparse_vec, self.alpha)
+        dense_vec, sparse_vec = hybrid_scale(dense_vec, sparse_vec, self.alpha)
        sparse_vec["values"] = [float(s1) for s1 in sparse_vec["values"]]
        # query pinecone with the query parameters
        result = self.index.query(
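The `hybrid_scale` helper added above is just a convex combination: the dense vector is scaled by alpha and the sparse values by (1 - alpha), so alpha=1 is pure dense (semantic) search and alpha=0 is pure sparse (keyword) search. A tiny worked example of the same function:

```python
import math

def hybrid_scale(dense, sparse, alpha):
    # convex combination: alpha to the dense side, (1 - alpha) to the sparse side
    if alpha < 0 or alpha > 1:
        raise ValueError("Alpha must be between 0 and 1")
    hsparse = {"indices": sparse["indices"],
               "values": [v * (1 - alpha) for v in sparse["values"]]}
    hdense = [v * alpha for v in dense]
    return hdense, hsparse

hd, hs = hybrid_scale([0.2, 0.8], {"indices": [7, 42], "values": [3.0, 1.0]}, alpha=0.75)
assert math.isclose(hd[0], 0.15)             # 0.2 * 0.75: dense keeps 75% of its weight
assert math.isclose(hs["values"][0], 0.75)   # 3.0 * 0.25: sparse keeps the other 25%
```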
@@ -1,7 +1,7 @@
"""Wrapper around weaviate vector database."""
from __future__ import annotations

-from typing import Any, Dict, List, Optional
+from typing import Any, Dict, List
from uuid import uuid4

from pydantic import Extra
@@ -18,7 +18,6 @@ class WeaviateHybridSearchRetriever(BaseRetriever):
        text_key: str,
        alpha: float = 0.5,
        k: int = 4,
-        attributes: Optional[List[str]] = None,
    ):
        try:
            import weaviate
@@ -37,8 +36,6 @@ class WeaviateHybridSearchRetriever(BaseRetriever):
        self._index_name = index_name
        self._text_key = text_key
        self._query_attrs = [self._text_key]
-        if attributes is not None:
-            self._query_attrs.extend(attributes)

    class Config:
        """Configuration for this pydantic object."""
@@ -70,8 +67,6 @@ class WeaviateHybridSearchRetriever(BaseRetriever):
        result = (
            query_obj.with_hybrid(content, alpha=self.alpha).with_limit(self.k).do()
        )
-        if "errors" in result:
-            raise ValueError(f"Error during query: {result['errors']}")

        docs = []
@@ -2,7 +2,7 @@
from __future__ import annotations

from abc import ABC, abstractmethod
-from typing import Any, Dict, Generic, List, NamedTuple, Optional, TypeVar
+from typing import Any, Dict, List, NamedTuple, Optional

from pydantic import BaseModel, Extra, Field, root_validator

@@ -194,7 +194,7 @@ class BaseLanguageModel(BaseModel, ABC):
            raise ValueError(
                "Could not import transformers python package. "
                "This is needed in order to calculate get_num_tokens. "
-                "Please install it with `pip install transformers`."
+                "Please it install it with `pip install transformers`."
            )
        # create a GPT-3 tokenizer instance
        tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
@@ -327,17 +327,15 @@ class BaseRetriever(ABC):

Memory = BaseMemory

-T = TypeVar("T")
-

-class BaseOutputParser(BaseModel, ABC, Generic[T]):
+class BaseOutputParser(BaseModel, ABC):
    """Class to parse the output of an LLM call.

    Output parsers help structure language model responses.
    """

    @abstractmethod
-    def parse(self, text: str) -> T:
+    def parse(self, text: str) -> Any:
        """Parse the output of an LLM call.

        A method which takes in a string (assumed output of language model )
@@ -131,7 +131,7 @@ class TextSplitter(ABC):
        except ImportError:
            raise ValueError(
                "Could not import transformers python package. "
-                "Please install it with `pip install transformers`."
+                "Please it install it with `pip install transformers`."
            )
        return cls(length_function=_huggingface_tokenizer_length, **kwargs)

@@ -150,7 +150,7 @@ class TextSplitter(ABC):
            raise ValueError(
                "Could not import tiktoken python package. "
                "This is needed in order to calculate max_tokens_for_prompt. "
-                "Please install it with `pip install tiktoken`."
+                "Please it install it with `pip install tiktoken`."
            )

        # create a GPT-3 encoder instance
@@ -205,7 +205,7 @@ class TokenTextSplitter(TextSplitter):
            raise ValueError(
                "Could not import tiktoken python package. "
                "This is needed in order to for TokenTextSplitter. "
-                "Please install it with `pip install tiktoken`."
+                "Please it install it with `pip install tiktoken`."
            )
        # create a GPT-3 encoder instance
        self._tokenizer = tiktoken.get_encoding(encoding_name)
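Both splitters above delegate length measurement to a tokenizer rather than counting characters. A short sketch of the tiktoken-based counting they rely on (requires `pip install tiktoken`; the "gpt2" encoding name matches the surrounding code):

```python
import tiktoken

enc = tiktoken.get_encoding("gpt2")

def token_len(text: str) -> int:
    """Length function measured in tokens rather than characters."""
    return len(enc.encode(text))

print(token_len("LangChain splits text by tokens."))  # token count, not char count
```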
@@ -17,8 +17,6 @@ class ApiConfig(BaseModel):


class AIPlugin(BaseModel):
-    """AI Plugin Definition."""
-
    schema_version: str
    name_for_model: str
    name_for_human: str
@@ -30,12 +28,6 @@ class AIPlugin(BaseModel):
    contact_email: Optional[str]
    legal_info_url: Optional[str]

-    @classmethod
-    def from_url(cls, url: str) -> AIPlugin:
-        """Instantiate AIPlugin from a URL."""
-        response = requests.get(url).json()
-        return cls(**response)


def marshal_spec(txt: str) -> dict:
    """Convert the yaml or json serialized spec to a dict."""
@@ -51,7 +43,8 @@ class AIPluginTool(BaseTool):

    @classmethod
    def from_plugin_url(cls, url: str) -> AIPluginTool:
-        plugin = AIPlugin.from_url(url)
+        response = requests.get(url).json()
+        plugin = AIPlugin(**response)
        description = (
            f"Call this tool to get the OpenAPI spec (and usage guide) "
            f"for interacting with the {plugin.name_for_human} API. "
@@ -2,7 +2,6 @@

import ast
import sys
-from io import StringIO
from typing import Dict, Optional

from pydantic import Field, root_validator
@@ -78,16 +77,8 @@ class PythonAstREPLTool(BaseTool):
        try:
            return eval(module_end_str, self.globals, self.locals)
        except Exception:
-            old_stdout = sys.stdout
-            sys.stdout = mystdout = StringIO()
-            try:
-                exec(module_end_str, self.globals, self.locals)
-                sys.stdout = old_stdout
-                output = mystdout.getvalue()
-            except Exception as e:
-                sys.stdout = old_stdout
-                output = str(e)
-            return output
+            exec(module_end_str, self.globals, self.locals)
+            return ""
+        except Exception as e:
+            return str(e)
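The block being removed above exists to capture whatever `exec` prints so it can be returned to the agent. The same effect can be had more safely with the standard library; a minimal sketch of that technique:

```python
import contextlib
import io

def run_and_capture(code: str) -> str:
    """exec() a snippet and return anything it printed."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {}, {})
    except Exception as e:  # surface errors as output, like the tool does
        return str(e)
    return buf.getvalue()

assert run_and_capture("print(2 + 2)") == "4\n"
```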
|
||||
@@ -31,8 +31,8 @@ Typically you'd use SequentialChain, here's a basic example:
|
||||
|
||||
1. Use NLA to find an email in Gmail
|
||||
2. Use LLMChain to generate a draft reply to (1)
|
||||
3. Use NLA to send the draft reply (2) to someone in Slack via direct message
|
||||
|
||||
3. Use NLA to send the draft reply (2) to someone in Slack via direct mesage
|
||||
|
||||
In code, below:
|
||||
|
||||
```python
|
||||
@@ -61,9 +61,6 @@ from langchain.utilities.zapier import ZapierNLAWrapper
|
||||
|
||||
llm = OpenAI(temperature=0)
|
||||
zapier = ZapierNLAWrapper()
|
||||
## To leverage a nla_oauth_access_token you may pass the value to the ZapierNLAWrapper
|
||||
## If you do this there is no need to initialize the ZAPIER_NLA_API_KEY env variable
|
||||
# zapier = ZapierNLAWrapper(zapier_nla_oauth_access_token="TOKEN_HERE")
|
||||
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
|
||||
agent = initialize_agent(
|
||||
toolkit.get_tools(),
|
||||
|
||||
@@ -31,7 +31,7 @@ class WikipediaAPIWrapper(BaseModel):
        except ImportError:
            raise ValueError(
                "Could not import wikipedia python package. "
-                "Please install it with `pip install wikipedia`."
+                "Please it install it with `pip install wikipedia`."
            )
        return values
@@ -37,7 +37,6 @@ class ZapierNLAWrapper(BaseModel):
    """

    zapier_nla_api_key: str
-    zapier_nla_oauth_access_token: str
    zapier_nla_api_base: str = "https://nla.zapier.com/api/v1/"

    class Config:
@@ -53,14 +52,7 @@ class ZapierNLAWrapper(BaseModel):
                "Content-Type": "application/json",
            }
        )

-        if self.zapier_nla_oauth_access_token:
-            session.headers.update(
-                {"Authorization": f"Bearer {self.zapier_nla_oauth_access_token}"}
-            )
-        else:
-            session.params = {"api_key": self.zapier_nla_api_key}

+        session.params = {"api_key": self.zapier_nla_api_key}
        return session

    def _get_action_request(
@@ -81,24 +73,9 @@ class ZapierNLAWrapper(BaseModel):
    @root_validator(pre=True)
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key exists in environment."""

-        zapier_nla_api_key_default = None

-        # If there is a oauth_access_key passed in the values
-        # we don't need a nla_api_key it can be blank
-        if "zapier_nla_oauth_access_token" in values:
-            zapier_nla_api_key_default = ""
-        else:
-            values["zapier_nla_oauth_access_token"] = ""

-        # we require at least one API Key
        zapier_nla_api_key = get_from_dict_or_env(
-            values,
-            "zapier_nla_api_key",
-            "ZAPIER_NLA_API_KEY",
-            zapier_nla_api_key_default,
+            values, "zapier_nla_api_key", "ZAPIER_NLA_API_KEY"
        )

        values["zapier_nla_api_key"] = zapier_nla_api_key

        return values
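The change above drops OAuth bearer-token support and always sends the API key as a query parameter. For reference, a hedged sketch of the two `requests.Session` auth styles involved (the key and token values are placeholders, and `make_session` is an illustrative helper, not the wrapper's API):

```python
import requests

def make_session(api_key: str = "", oauth_token: str = "") -> requests.Session:
    session = requests.Session()
    session.headers.update({"Accept": "application/json",
                            "Content-Type": "application/json"})
    if oauth_token:
        # header-based auth, as on the removed code path
        session.headers.update({"Authorization": f"Bearer {oauth_token}"})
    else:
        # query-parameter auth, as on the kept code path
        session.params = {"api_key": api_key}
    return session

s = make_session(api_key="YOUR_ZAPIER_NLA_API_KEY")
```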
@@ -3,7 +3,7 @@ from __future__ import annotations

import logging
import uuid
-from typing import Any, Iterable, List, Optional, Type
+from typing import Any, Iterable, List, Optional

import numpy as np

@@ -210,7 +210,7 @@ class AtlasDB(VectorStore):

    @classmethod
    def from_texts(
-        cls: Type[AtlasDB],
+        cls,
        texts: List[str],
        embedding: Optional[Embeddings] = None,
        metadatas: Optional[List[dict]] = None,
@@ -270,7 +270,7 @@ class AtlasDB(VectorStore):

    @classmethod
    def from_documents(
-        cls: Type[AtlasDB],
+        cls,
        documents: List[Document],
        embedding: Optional[Embeddings] = None,
        ids: Optional[List[str]] = None,
@@ -1,10 +1,8 @@
"""Interface for vector stores."""
from __future__ import annotations

-import asyncio
from abc import ABC, abstractmethod
-from functools import partial
-from typing import Any, Dict, Iterable, List, Optional, Type, TypeVar
+from typing import Any, Dict, Iterable, List, Optional

from pydantic import BaseModel, Field, root_validator

@@ -12,8 +10,6 @@ from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.schema import BaseRetriever

-VST = TypeVar("VST", bound="VectorStore")


class VectorStore(ABC):
    """Interface for vector stores."""
@@ -85,12 +81,7 @@ class VectorStore(ABC):
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Return docs most similar to query."""

-        # This is a temporary workaround to make the similarity search
-        # asynchronous. The proper solution is to make the similarity search
-        # asynchronous in the vector store implementations.
-        func = partial(self.similarity_search, query, k, **kwargs)
-        return await asyncio.get_event_loop().run_in_executor(None, func)
+        raise NotImplementedError

    def similarity_search_by_vector(
        self, embedding: List[float], k: int = 4, **kwargs: Any
@@ -110,12 +101,7 @@ class VectorStore(ABC):
        self, embedding: List[float], k: int = 4, **kwargs: Any
    ) -> List[Document]:
        """Return docs most similar to embedding vector."""

-        # This is a temporary workaround to make the similarity search
-        # asynchronous. The proper solution is to make the similarity search
-        # asynchronous in the vector store implementations.
-        func = partial(self.similarity_search_by_vector, embedding, k, **kwargs)
-        return await asyncio.get_event_loop().run_in_executor(None, func)
+        raise NotImplementedError

    def max_marginal_relevance_search(
        self, query: str, k: int = 4, fetch_k: int = 20
@@ -139,12 +125,7 @@ class VectorStore(ABC):
        self, query: str, k: int = 4, fetch_k: int = 20
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance."""

-        # This is a temporary workaround to make the similarity search
-        # asynchronous. The proper solution is to make the similarity search
-        # asynchronous in the vector store implementations.
-        func = partial(self.max_marginal_relevance_search, query, k, fetch_k)
-        return await asyncio.get_event_loop().run_in_executor(None, func)
+        raise NotImplementedError

    def max_marginal_relevance_search_by_vector(
        self, embedding: List[float], k: int = 4, fetch_k: int = 20
@@ -172,11 +153,11 @@ class VectorStore(ABC):

    @classmethod
    def from_documents(
-        cls: Type[VST],
+        cls,
        documents: List[Document],
        embedding: Embeddings,
        **kwargs: Any,
-    ) -> VST:
+    ) -> VectorStore:
        """Return VectorStore initialized from documents and embeddings."""
        texts = [d.page_content for d in documents]
        metadatas = [d.metadata for d in documents]
@@ -184,11 +165,11 @@ class VectorStore(ABC):

    @classmethod
    async def afrom_documents(
-        cls: Type[VST],
+        cls,
        documents: List[Document],
        embedding: Embeddings,
        **kwargs: Any,
-    ) -> VST:
+    ) -> VectorStore:
        """Return VectorStore initialized from documents and embeddings."""
        texts = [d.page_content for d in documents]
        metadatas = [d.metadata for d in documents]
@@ -197,22 +178,22 @@ class VectorStore(ABC):
    @classmethod
    @abstractmethod
    def from_texts(
-        cls: Type[VST],
+        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
-    ) -> VST:
+    ) -> VectorStore:
        """Return VectorStore initialized from texts and embeddings."""

    @classmethod
    async def afrom_texts(
-        cls: Type[VST],
+        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
-    ) -> VST:
+    ) -> VectorStore:
        """Return VectorStore initialized from texts and embeddings."""
        raise NotImplementedError
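The deleted default implementations above show a useful pattern: wrapping a synchronous method in `run_in_executor` so it can be awaited without blocking the event loop. A self-contained sketch of that workaround (the `slow_search` stand-in is illustrative):

```python
import asyncio
from functools import partial

def slow_search(query: str, k: int) -> list:
    # stand-in for a blocking similarity search
    return [f"doc-{i} for {query!r}" for i in range(k)]

async def asearch(query: str, k: int = 4) -> list:
    func = partial(slow_search, query, k)
    # off-load the blocking call to the default thread pool
    return await asyncio.get_event_loop().run_in_executor(None, func)

print(asyncio.run(asearch("hello", 2)))
```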
@@ -3,7 +3,7 @@ from __future__ import annotations

import logging
import uuid
-from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple, Type
+from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Tuple

import numpy as np

@@ -269,7 +269,7 @@ class Chroma(VectorStore):

    @classmethod
    def from_texts(
-        cls: Type[Chroma],
+        cls,
        texts: List[str],
        embedding: Optional[Embeddings] = None,
        metadatas: Optional[List[dict]] = None,
@@ -307,7 +307,7 @@ class Chroma(VectorStore):

    @classmethod
    def from_documents(
-        cls: Type[Chroma],
+        cls,
        documents: List[Document],
        embedding: Optional[Embeddings] = None,
        ids: Optional[List[str]] = None,
@@ -1,10 +1,7 @@
"""VectorStore wrapper around a Postgres/PGVector database."""
from __future__ import annotations

-import enum
import logging
import uuid
-from typing import Any, Dict, Iterable, List, Optional, Tuple, Type
+from typing import Any, Dict, Iterable, List, Optional, Tuple

import sqlalchemy
from pgvector.sqlalchemy import Vector
@@ -349,7 +346,7 @@ class PGVector(VectorStore):

    @classmethod
    def from_texts(
-        cls: Type[PGVector],
+        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
@@ -358,7 +355,7 @@ class PGVector(VectorStore):
        ids: Optional[List[str]] = None,
        pre_delete_collection: bool = False,
        **kwargs: Any,
-    ) -> PGVector:
+    ) -> "PGVector":
        """
        Return VectorStore initialized from texts and embeddings.
        Postgres connection string is required
@@ -398,7 +395,7 @@ class PGVector(VectorStore):

    @classmethod
    def from_documents(
-        cls: Type[PGVector],
+        cls,
        documents: List[Document],
        embedding: Embeddings,
        collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
@@ -406,7 +403,7 @@ class PGVector(VectorStore):
        ids: Optional[List[str]] = None,
        pre_delete_collection: bool = False,
        **kwargs: Any,
-    ) -> PGVector:
+    ) -> "PGVector":
        """
        Return VectorStore initialized from documents and embeddings.
        Postgres connection string is required
@@ -1,9 +1,7 @@
"""Wrapper around Qdrant vector database."""
from __future__ import annotations

import uuid
from operator import itemgetter
-from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type, Union
+from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Union, cast

from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
@@ -21,7 +19,6 @@ class Qdrant(VectorStore):
    Example:
        .. code-block:: python

-            from qdrant_client import QdrantClient
            from langchain import Qdrant

            client = QdrantClient()
@@ -126,7 +123,7 @@ class Qdrant(VectorStore):
            filter: Filter by metadata. Defaults to None.

        Returns:
-            List of Documents most similar to the query and score for each.
+            List of Documents most similar to the query and score for each
        """
        embedding = self.embedding_function(query)
        results = self.client.search(
@@ -158,7 +155,6 @@ class Qdrant(VectorStore):
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
-                Defaults to 20.

        Returns:
            List of Documents selected by maximal marginal relevance.
@@ -180,9 +176,55 @@ class Qdrant(VectorStore):
            for i in mmr_selected
        ]

+    @classmethod
+    def from_documents(
+        cls,
+        documents: List[Document],
+        embedding: Embeddings,
+        location: Optional[str] = None,
+        url: Optional[str] = None,
+        port: Optional[int] = 6333,
+        grpc_port: int = 6334,
+        prefer_grpc: bool = False,
+        https: Optional[bool] = None,
+        api_key: Optional[str] = None,
+        prefix: Optional[str] = None,
+        timeout: Optional[float] = None,
+        host: Optional[str] = None,
+        path: Optional[str] = None,
+        collection_name: Optional[str] = None,
+        distance_func: str = "Cosine",
+        content_payload_key: str = CONTENT_KEY,
+        metadata_payload_key: str = METADATA_KEY,
+        **kwargs: Any,
+    ) -> "Qdrant":
+        return cast(
+            Qdrant,
+            super().from_documents(
+                documents,
+                embedding,
+                location=location,
+                url=url,
+                port=port,
+                grpc_port=grpc_port,
+                prefer_grpc=prefer_grpc,
+                https=https,
+                api_key=api_key,
+                prefix=prefix,
+                timeout=timeout,
+                host=host,
+                path=path,
+                collection_name=collection_name,
+                distance_func=distance_func,
+                content_payload_key=content_payload_key,
+                metadata_payload_key=metadata_payload_key,
+                **kwargs,
+            ),
+        )
+
    @classmethod
    def from_texts(
-        cls: Type[Qdrant],
+        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
@@ -202,8 +244,8 @@ class Qdrant(VectorStore):
        content_payload_key: str = CONTENT_KEY,
        metadata_payload_key: str = METADATA_KEY,
        **kwargs: Any,
-    ) -> Qdrant:
-        """Construct Qdrant wrapper from a list of texts.
+    ) -> "Qdrant":
+        """Construct Qdrant wrapper from raw documents.

        Args:
            texts: A list of texts to be indexed in Qdrant.
@@ -214,50 +256,45 @@ class Qdrant(VectorStore):
            location:
                If `:memory:` - use in-memory Qdrant instance.
                If `str` - use it as a `url` parameter.
-                If `None` - fallback to relying on `host` and `port` parameters.
+                If `None` - use default values for `host` and `port`.
            url: either host or str of "Optional[scheme], host, Optional[port],
                Optional[prefix]". Default: `None`
            port: Port of the REST API interface. Default: 6333
            grpc_port: Port of the gRPC interface. Default: 6334
            prefer_grpc:
-                If true - use gPRC interface whenever possible in custom methods.
-                Default: False
-            https: If true - use HTTPS(SSL) protocol. Default: None
-            api_key: API key for authentication in Qdrant Cloud. Default: None
+                If `true` - use gPRC interface whenever possible in custom methods.
+            https: If `true` - use HTTPS(SSL) protocol. Default: `None`
+            api_key: API key for authentication in Qdrant Cloud. Default: `None`
            prefix:
-                If not None - add prefix to the REST URL path.
-                Example: service/v1 will result in
-                http://localhost:6333/service/v1/{qdrant-endpoint} for REST API.
-                Default: None
+                If not `None` - add `prefix` to the REST URL path.
+                Example: `service/v1` will result in
+                `http://localhost:6333/service/v1/{qdrant-endpoint}` for REST API.
+                Default: `None`
            timeout:
                Timeout for REST and gRPC API requests.
                Default: 5.0 seconds for REST and unlimited for gRPC
            host:
                Host name of Qdrant service. If url and host are None, set to
-                'localhost'. Default: None
+                'localhost'. Default: `None`
            path:
                Path in which the vectors will be stored while using local mode.
-                Default: None
+                Default: `None`
            collection_name:
                Name of the Qdrant collection to be used. If not provided,
-                it will be created randomly. Default: None
+                will be created randomly.
            distance_func:
-                Distance function. One of: "Cosine" / "Euclid" / "Dot".
-                Default: "Cosine"
+                Distance function. One of the: "Cosine" / "Euclid" / "Dot".
            content_payload_key:
                A payload key used to store the content of the document.
                Default: "page_content"
            metadata_payload_key:
                A payload key used to store the metadata of the document.
                Default: "metadata"
            **kwargs:
                Additional arguments passed directly into REST client initialization

        This is a user friendly interface that:
-        1. Creates embeddings, one for each text
-        2. Initializes the Qdrant database as an in-memory docstore by default
-           (and overridable to a remote docstore)
-        3. Adds the text embeddings to the Qdrant database
+        1. Embeds documents.
+        2. Creates an in memory docstore
+        3. Initializes the Qdrant database

        This is intended to be a quick way to get started.
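For orientation, a short usage sketch of the quickstart path the docstring above describes, assuming `qdrant-client` is installed; `ToyEmbeddings` is a deterministic stand-in for whatever real `Embeddings` implementation you use:

```python
from langchain.vectorstores import Qdrant
from langchain.embeddings.base import Embeddings

class ToyEmbeddings(Embeddings):
    """Deterministic stand-in embeddings, only for this sketch."""
    def embed_documents(self, texts):
        return [[float(len(t)), 1.0] for t in texts]
    def embed_query(self, text):
        return [float(len(text)), 1.0]

qdrant = Qdrant.from_texts(
    ["harrison worked at kensho", "ankush worked at facebook"],
    ToyEmbeddings(),
    location=":memory:",  # in-memory Qdrant instance, per the docstring
    collection_name="demo",
)
print(qdrant.similarity_search("Where did harrison work?", k=1))
```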
@@ -4,7 +4,7 @@ from __future__ import annotations

import json
import logging
import uuid
-from typing import Any, Callable, Dict, Iterable, List, Mapping, Optional, Tuple, Type
+from typing import Any, Callable, Dict, Iterable, List, Mapping, Optional, Tuple

import numpy as np
from pydantic import BaseModel, root_validator
@@ -227,7 +227,7 @@ class Redis(VectorStore):

    @classmethod
    def from_texts(
-        cls: Type[Redis],
+        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
@@ -1,7 +1,7 @@
"""Wrapper around weaviate vector database."""
from __future__ import annotations

-from typing import Any, Dict, Iterable, List, Optional, Type
+from typing import Any, Dict, Iterable, List, Optional
from uuid import uuid4

from langchain.docstore.document import Document
@@ -83,8 +83,6 @@ class Weaviate(VectorStore):
            content["certainty"] = kwargs.get("search_distance")
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        result = query_obj.with_near_text(content).with_limit(k).do()
-        if "errors" in result:
-            raise ValueError(f"Error during query: {result['errors']}")
        docs = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
@@ -98,8 +96,6 @@ class Weaviate(VectorStore):
        vector = {"vector": embedding}
        query_obj = self._client.query.get(self._index_name, self._query_attrs)
        result = query_obj.with_near_vector(vector).with_limit(k).do()
-        if "errors" in result:
-            raise ValueError(f"Error during query: {result['errors']}")
        docs = []
        for res in result["data"]["Get"][self._index_name]:
            text = res.pop(self._text_key)
@@ -108,11 +104,11 @@ class Weaviate(VectorStore):

    @classmethod
    def from_texts(
-        cls: Type[Weaviate],
+        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        **kwargs: Any,
-    ) -> Weaviate:
+    ) -> VectorStore:
        """Not implemented for Weaviate yet."""
        raise NotImplementedError("weaviate does not currently support `from_texts`.")
poetry.lock (generated, 524 lines changed)
@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.4.2 and should not be changed by hand.

[[package]]
name = "absl-py"
@@ -562,7 +562,7 @@ name = "backoff"
version = "2.2.1"
description = "Function decoration for backoff and retry"
category = "main"
-optional = false
+optional = true
python-versions = ">=3.7,<4.0"
files = [
    {file = "backoff-2.2.1-py3-none-any.whl", hash = "sha256:63579f9a0628e06278f7e47b7d7d5b6ce20dc65c5e96a6f3ca99a6adca0396e8"},
@@ -936,31 +936,6 @@ files = [
    {file = "charset_normalizer-3.1.0-py3-none-any.whl", hash = "sha256:3d9098b479e78c85080c98e1e35ff40b4a31d8953102bb0fd7d1b6f8a2111a3d"},
]

-[[package]]
-name = "chromadb"
-version = "0.3.21"
-description = "Chroma."
-category = "dev"
-optional = false
-python-versions = ">=3.7"
-files = [
-    {file = "chromadb-0.3.21-py3-none-any.whl", hash = "sha256:b497516ef403d357944742b2363eb729019d68ec0d1a7062a6abe8e127ccf28f"},
-    {file = "chromadb-0.3.21.tar.gz", hash = "sha256:7b3417892666dc90df10eafae719ee189037c448c1c96e6c7964daa870483c3a"},
-]
-
-[package.dependencies]
-clickhouse-connect = ">=0.5.7"
-duckdb = ">=0.7.1"
-fastapi = ">=0.85.1"
-hnswlib = ">=0.7"
-numpy = ">=1.21.6"
-pandas = ">=1.3"
-posthog = ">=2.4.0"
-pydantic = ">=1.9"
-requests = ">=2.28"
-sentence-transformers = ">=2.2.2"
-uvicorn = {version = ">=0.18.3", extras = ["standard"]}
-
[[package]]
name = "click"
version = "8.1.3"
@@ -976,96 +951,6 @@ files = [
[package.dependencies]
colorama = {version = "*", markers = "platform_system == \"Windows\""}

-[[package]]
-name = "clickhouse-connect"
-version = "0.5.20"
-description = "ClickHouse core driver, SqlAlchemy, and Superset libraries"
-category = "dev"
-optional = false
-python-versions = "~=3.7"
-files = [
{file = "clickhouse-connect-0.5.20.tar.gz", hash = "sha256:5fc9a84849f3c3b6f6928b45a0df17fa63ebcf4e518b3a48ec70720957e18683"},
|
||||
{file = "clickhouse_connect-0.5.20-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c29cf8b2c90eed6b83366c13ab5ad471ff6ef2e334f35818729330854b9747ac"},
|
||||
{file = "clickhouse_connect-0.5.20-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5c03ded1b006fa2cf8f7d823f0ff9c6d294e442a123c96ca2a9ebc4b293bfb7f"},
|
||||
{file = "clickhouse_connect-0.5.20-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9eb0024160412d9c6079fa6982cb29abda4db8412b4f63918de7a1bde1dcb7aa"},
|
||||
{file = "clickhouse_connect-0.5.20-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:170bd258d21bc828557f8a55f23affe22cc4e671c93f645a6316ef874e359f8e"},
|
||||
{file = "clickhouse_connect-0.5.20-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:dc70fee875fdba42c0a6f519fa376659a08253fd36d188b8b304f4ccda572177"},
|
||||
{file = "clickhouse_connect-0.5.20-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:18837e06846797db475b6aee13f03928fb169f64d0efb268e2bb04e015990b5b"},
|
||||
{file = "clickhouse_connect-0.5.20-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:76f7a7d2d41377e6f382a7ada825be594c2d316481f3194bfffd025727633258"},
|
||||
{file = "clickhouse_connect-0.5.20-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:3bac453f1199af29ec7292d2fd2a8cb0cc0e6692bec9c9da50ce5aec10ff0339"},
|
||||
{file = "clickhouse_connect-0.5.20-cp310-cp310-win32.whl", hash = "sha256:14983562d2687b18d03a35f27b4e7f28cf013c280ff4fee726501e03bae7528d"},
|
||||
{file = "clickhouse_connect-0.5.20-cp310-cp310-win_amd64.whl", hash = "sha256:3d618a9c15ee4d2facc7a79e59a646262da64e6ec39d2a1ac6a68167d52266bf"},
|
||||
{file = "clickhouse_connect-0.5.20-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6bdfb74ba2bf5157230f576e16c7d708f20ffa7e4b19c54288d7db2b55ebcd17"},
|
||||
{file = "clickhouse_connect-0.5.20-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:fce7e54ad14b732479c5630948324f7088c3092a74a2442bf015a7cab4bc0a41"},
|
||||
{file = "clickhouse_connect-0.5.20-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1e6a2b6d123f5de362d49f079c509a0a43cfbaecae0130c860706ef738af12b7"},
|
||||
{file = "clickhouse_connect-0.5.20-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7a9391128387013755de8e420bb7e17c6c809f77ca3233fdc966a1df023fa85d"},
|
||||
{file = "clickhouse_connect-0.5.20-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1df976816913675b46134e8dd9dee2cf315cc4bf42e258211f8036099b8fc280"},
|
||||
{file = "clickhouse_connect-0.5.20-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:f1ddeb651bc75b87ec5fa1fbe17fe3a589d00f42cad76d6e64918067f5025798"},
|
||||
{file = "clickhouse_connect-0.5.20-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:caf60b4bfb7214d80455137eee45ca0943a370885d65f4298fafde0d431e837a"},
|
||||
{file = "clickhouse_connect-0.5.20-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:5c0bdcb72607244dc920f543ee6363a6094e836770aaac07f20556936af85813"},
|
||||
{file = "clickhouse_connect-0.5.20-cp311-cp311-win32.whl", hash = "sha256:cc3f77df2b1cab2aa99b59f529aead2cc96beac1639ed18f7fd8dba392957623"},
|
||||
{file = "clickhouse_connect-0.5.20-cp311-cp311-win_amd64.whl", hash = "sha256:e44c3b7e40402ce0650f69cbc31f2f503073e2bec9f2b31befbd823150f2431d"},
|
||||
{file = "clickhouse_connect-0.5.20-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:ba78e7d270d78f9559e4a836c6c4f55ab54d9f2b6505c0d05db6260e8e2a4f6a"},
|
||||
{file = "clickhouse_connect-0.5.20-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3e8924824cd19b739cc920d867bf291a31a5da406637e0c575f6eb961cfb0557"},
|
||||
{file = "clickhouse_connect-0.5.20-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:672c260c471fd18a87a4f5130e6d72590cd4f57289669c58feff5be934810d28"},
|
||||
{file = "clickhouse_connect-0.5.20-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:69887898f8f5ea6e70c30aa51c756f8a752ef0eb1df747d4aec7b7d10de5e103"},
|
||||
{file = "clickhouse_connect-0.5.20-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:c4da55465a52e0e440772e289e6959cc6acbb2efa0561a7ea4f9a7108159958d"},
|
||||
{file = "clickhouse_connect-0.5.20-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:2087b64ab47969e603cd9735e7c0433bdf15c6d83025abd00c50ca9a617ed39b"},
|
||||
{file = "clickhouse_connect-0.5.20-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:28b72cabb1d4fc3f04392ed1f654bd925b6c950305869971186f73b2d13d835a"},
|
||||
{file = "clickhouse_connect-0.5.20-cp37-cp37m-win32.whl", hash = "sha256:a481e13216de227aa624449f5f6ead9e51fe7c8f18bbd783c41e4b396919fa08"},
|
||||
{file = "clickhouse_connect-0.5.20-cp37-cp37m-win_amd64.whl", hash = "sha256:c1dc77bdc15240d6d4d375e098c77403aeabbc6f8b1c2ce524f4389a5d8c6d74"},
|
||||
{file = "clickhouse_connect-0.5.20-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:4fe527b6b4306cad58dde934493d5f018166f78f5914f6abf6ed93750ca7ecbd"},
|
||||
{file = "clickhouse_connect-0.5.20-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:c07b9ca21d302e843aa8c031ef15f85c86280c5730858edfe4eeb952d3991d1d"},
|
||||
{file = "clickhouse_connect-0.5.20-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:71e427b3cd1f611bcb8315ea9bc17f0329329ca21043f1a5ef068e2903457b9b"},
|
||||
{file = "clickhouse_connect-0.5.20-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9319037b437c8d1297b00d8bc3f92239cc2296db409b5bfc2ff22b05c5f3a26f"},
|
||||
{file = "clickhouse_connect-0.5.20-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d8c3c533fd2baff653dc40e7b88ca86ce9b8d0923c34fb33ce5ce1d1b7370fe6"},
|
||||
{file = "clickhouse_connect-0.5.20-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:c850bc0cf5a00bd144202a6926b646baa60fb4e6c449b62d46c230c548ec760a"},
|
||||
{file = "clickhouse_connect-0.5.20-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:632922c90cd71fcb8e1b7e6e2a9b4487dee2e67b91846dc1778cfd9d5198d047"},
|
||||
{file = "clickhouse_connect-0.5.20-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:a6c7733b5754ea048bd7928b0cce6625d71c709570c97f1819ba36054850d915"},
|
||||
{file = "clickhouse_connect-0.5.20-cp38-cp38-win32.whl", hash = "sha256:738b35e061a3c665e9a099a3b5cb50338bed89a6eee3ce29190cd525a1bc1892"},
|
||||
{file = "clickhouse_connect-0.5.20-cp38-cp38-win_amd64.whl", hash = "sha256:58da16eac95126d441f106d27c8e3ae931fcc784f263d7d916b5a8086bdcf757"},
|
||||
{file = "clickhouse_connect-0.5.20-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b9c57f6958021ec0b22eabaa02e567df3ff5f85fdfd9d052e3aface655bdf3d1"},
|
||||
{file = "clickhouse_connect-0.5.20-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e9c9a2de183a85fc32ef70973cfad5c9af2a8d73733aa30b9523c1400b813c13"},
|
||||
{file = "clickhouse_connect-0.5.20-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50fd663b132c4edc1fc5dae33c5cbd2538dd2e0c94bd9fff5e98ca3ca12059a2"},
|
||||
{file = "clickhouse_connect-0.5.20-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:26a98b165fa2c8420e5219db244f0790b13f401a0932c6a7d5e5c1a959a26b80"},
|
||||
{file = "clickhouse_connect-0.5.20-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9686bd02a16e3b6cbf976b2476e54bc7caaf1a95fd129fd44b2692d082dfcef6"},
|
||||
{file = "clickhouse_connect-0.5.20-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:d01a51871dde0cd0d24efafd61ab27c57293a0456a26ec7e8a5a585623239ab1"},
|
||||
{file = "clickhouse_connect-0.5.20-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:2c1096ebad10964fcdd646f41228accf182d24b066cefd18d9b33f021e3017cd"},
|
||||
{file = "clickhouse_connect-0.5.20-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:1f0407cc9ea9d2cf51edfe59993c536c256ae54c40c6b36fb7f738edd48f51b5"},
|
||||
{file = "clickhouse_connect-0.5.20-cp39-cp39-win32.whl", hash = "sha256:184f7c119c9725b25ecaa3011420de8dc06530999653508a983b27c90894146c"},
|
||||
{file = "clickhouse_connect-0.5.20-cp39-cp39-win_amd64.whl", hash = "sha256:f7d2cbde4543cccddef8465afed221f81095eec3d3b763d7570c22ae99819ab4"},
|
||||
{file = "clickhouse_connect-0.5.20-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:7f83a6e61b9832fc9184bf67e3f7bc041f3b940c066b8162bfadf02aa484b1c4"},
|
||||
{file = "clickhouse_connect-0.5.20-pp37-pypy37_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c61b22a7038553813a8f5432cd3b1e57b6d94c629d599d775f57c64c4700a5df"},
|
||||
{file = "clickhouse_connect-0.5.20-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fbae752fadbd9fa9390f2246c5ce6e75a91225d03adb3451beb49bd3f1ea48f0"},
|
||||
{file = "clickhouse_connect-0.5.20-pp37-pypy37_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b9da5c94be2255d6e07e255899411a5e009723f331d90359e5b21c66e8007630"},
|
||||
{file = "clickhouse_connect-0.5.20-pp37-pypy37_pp73-win_amd64.whl", hash = "sha256:205a3dc992548891150d42856e418398d135d9dfa5f30f53bb7c3633d6b449d0"},
|
||||
{file = "clickhouse_connect-0.5.20-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:5e0c42adc692f2fb285f5f898d166cf4ed9b5779e5f3effab8f612cd3362f004"},
|
||||
{file = "clickhouse_connect-0.5.20-pp38-pypy38_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e8a2d9dfbfd7c3075f5d1c7011e32b5b62853000d16f93684fa69d8b8979a04"},
|
||||
{file = "clickhouse_connect-0.5.20-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2f8bb09db27aba694193073137bd69f8404e53c2ee80f2dbf41c829c081175a"},
|
||||
{file = "clickhouse_connect-0.5.20-pp38-pypy38_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:52e07d91e3bcaf3989d698a4d9ad9b36f1dcf357673cc4c44a6663ab78581066"},
|
||||
{file = "clickhouse_connect-0.5.20-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:7832b2c4c4c4b316258bd078b54a82c84aeccd62c917eb986059de738b13b56b"},
|
||||
{file = "clickhouse_connect-0.5.20-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:2e7dad00ce8df847f896c50aa9644c685259a995a15823fec788348e736fb893"},
|
||||
{file = "clickhouse_connect-0.5.20-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:34b6c4f16d8b4c5c458504da64e87fb2ec1390640ed7345bf051cfbba18526f4"},
|
||||
{file = "clickhouse_connect-0.5.20-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:85ce3896158cbac451253bc3632140920a57bb775a82d68370de9ace97ce96a8"},
|
||||
{file = "clickhouse_connect-0.5.20-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:65f1e552c4efdab1937ff824f062561fe0b6901044ea06b373a35c8a1a679cea"},
|
||||
{file = "clickhouse_connect-0.5.20-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:05d1cfd70fd90b5d7cdb4e93d603d34f74d34327811e8f573fbbd87838cfd4a3"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
certifi = "*"
|
||||
lz4 = "*"
|
||||
pytz = "*"
|
||||
urllib3 = ">=1.26"
|
||||
zstandard = "*"
|
||||
|
||||
[package.extras]
|
||||
arrow = ["pyarrow"]
|
||||
numpy = ["numpy"]
|
||||
orjson = ["orjson"]
|
||||
pandas = ["pandas"]
|
||||
sqlalchemy = ["sqlalchemy (>1.3.21,<1.4)"]
|
||||
superset = ["apache-superset (>=1.4.1)"]
|
||||
|
||||
[[package]]
|
||||
name = "cohere"
|
||||
version = "3.10.0"
|
||||
@@ -1716,7 +1601,7 @@ name = "fastapi"
version = "0.95.0"
description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production"
category = "main"
optional = false
optional = true
python-versions = ">=3.7"
files = [
{file = "fastapi-0.95.0-py3-none-any.whl", hash = "sha256:daf73bbe844180200be7966f68e8ec9fd8be57079dff1bacb366db32729e6eb5"},
@@ -2062,23 +1947,6 @@ protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.1 || >4.21.1,<4
[package.extras]
grpc = ["grpcio (>=1.44.0,<2.0.0dev)"]

[[package]]
name = "gptcache"
version = "0.1.8"
description = "GPT Cache, a powerful caching library that can be used to speed up and lower the cost of chat applications that rely on the LLM service. GPT Cache works as a memcache for AIGC applications, similar to how Redis works for traditional applications."
category = "main"
optional = true
python-versions = ">=3.8.1"
files = [
{file = "gptcache-0.1.8-py3-none-any.whl", hash = "sha256:953662291819471e5461920c89367084f905237a8506f1a1605729f3e633f147"},
{file = "gptcache-0.1.8.tar.gz", hash = "sha256:23200cc0783776210cce85a588ae68222d522ce9456f74b7836945ebe8b15820"},
]

[package.dependencies]
cachetools = "*"
numpy = "*"
openai = "*"

[[package]]
name = "greenlet"
version = "2.0.1"
@@ -2313,7 +2181,7 @@ name = "h11"
version = "0.14.0"
description = "A pure-Python, bring-your-own-I/O implementation of HTTP/1.1"
category = "main"
optional = false
optional = true
python-versions = ">=3.7"
files = [
{file = "h11-0.14.0-py3-none-any.whl", hash = "sha256:e3fe4ac4b851c468cc8363d500db52c2ead036020723024a109d37346efaa761"},
@@ -2374,20 +2242,6 @@ files = [
[package.dependencies]
numpy = ">=1.14.5"

[[package]]
name = "hnswlib"
version = "0.7.0"
description = "hnswlib"
category = "dev"
optional = false
python-versions = "*"
files = [
{file = "hnswlib-0.7.0.tar.gz", hash = "sha256:bc459668e7e44bb7454b256b90c98c5af750653919d9a91698dafcf416cf64c4"},
]

[package.dependencies]
numpy = "*"

[[package]]
name = "hpack"
version = "4.0.0"
@@ -2402,14 +2256,14 @@ files = [

[[package]]
name = "httpcore"
version = "0.17.0"
version = "0.16.3"
description = "A minimal low-level HTTP client."
category = "main"
optional = true
python-versions = ">=3.7"
files = [
{file = "httpcore-0.17.0-py3-none-any.whl", hash = "sha256:0fdfea45e94f0c9fd96eab9286077f9ff788dd186635ae61b312693e4d943599"},
{file = "httpcore-0.17.0.tar.gz", hash = "sha256:cc045a3241afbf60ce056202301b4d8b6af08845e3294055eb26b09913ef903c"},
{file = "httpcore-0.16.3-py3-none-any.whl", hash = "sha256:da1fb708784a938aa084bde4feb8317056c55037247c787bd7e19eb2c2949dc0"},
{file = "httpcore-0.16.3.tar.gz", hash = "sha256:c5d6f04e2fc530f39e0c077e6a30caa53f1451096120f1f38b954afd0b17c0cb"},
]

[package.dependencies]
@@ -2442,7 +2296,7 @@ name = "httptools"
version = "0.5.0"
description = "A collection of framework independent HTTP protocol utils."
category = "main"
optional = false
optional = true
python-versions = ">=3.5.0"
files = [
{file = "httptools-0.5.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:8f470c79061599a126d74385623ff4744c4e0f4a0997a353a44923c0b561ee51"},
@@ -2493,26 +2347,26 @@ test = ["Cython (>=0.29.24,<0.30.0)"]

[[package]]
name = "httpx"
version = "0.24.0"
version = "0.23.3"
description = "The next generation HTTP client."
category = "main"
optional = true
python-versions = ">=3.7"
files = [
{file = "httpx-0.24.0-py3-none-any.whl", hash = "sha256:447556b50c1921c351ea54b4fe79d91b724ed2b027462ab9a329465d147d5a4e"},
{file = "httpx-0.24.0.tar.gz", hash = "sha256:507d676fc3e26110d41df7d35ebd8b3b8585052450f4097401c9be59d928c63e"},
{file = "httpx-0.23.3-py3-none-any.whl", hash = "sha256:a211fcce9b1254ea24f0cd6af9869b3d29aba40154e947d2a07bb499b3e310d6"},
{file = "httpx-0.23.3.tar.gz", hash = "sha256:9818458eb565bb54898ccb9b8b251a28785dd4a55afbc23d0eb410754fe7d0f9"},
]

[package.dependencies]
certifi = "*"
h2 = {version = ">=3,<5", optional = true, markers = "extra == \"http2\""}
httpcore = ">=0.15.0,<0.18.0"
idna = "*"
httpcore = ">=0.15.0,<0.17.0"
rfc3986 = {version = ">=1.3,<2", extras = ["idna2008"]}
sniffio = "*"

[package.extras]
brotli = ["brotli", "brotlicffi"]
cli = ["click (>=8.0.0,<9.0.0)", "pygments (>=2.0.0,<3.0.0)", "rich (>=10,<14)"]
cli = ["click (>=8.0.0,<9.0.0)", "pygments (>=2.0.0,<3.0.0)", "rich (>=10,<13)"]
http2 = ["h2 (>=3,<5)"]
socks = ["socksio (>=1.0.0,<2.0.0)"]

@@ -3013,7 +2867,7 @@ name = "joblib"
version = "1.2.0"
description = "Lightweight pipelining with Python functions"
category = "main"
optional = false
optional = true
python-versions = ">=3.7"
files = [
{file = "joblib-1.2.0-py3-none-any.whl", hash = "sha256:091138ed78f800342968c523bdde947e7a305b8594b910a0fea2ab83c3c6d385"},
@@ -3380,14 +3234,14 @@ tornado = {version = "*", markers = "python_version > \"2.7\""}

[[package]]
name = "loguru"
version = "0.7.0"
version = "0.6.0"
description = "Python logging made (stupidly) simple"
category = "main"
optional = false
python-versions = ">=3.5"
files = [
{file = "loguru-0.7.0-py3-none-any.whl", hash = "sha256:b93aa30099fa6860d4727f1b81f8718e965bb96253fa190fab2077aaad6d15d3"},
{file = "loguru-0.7.0.tar.gz", hash = "sha256:1612053ced6ae84d7959dd7d5e431a0532642237ec21f7fd83ac73fe539e03e1"},
{file = "loguru-0.6.0-py3-none-any.whl", hash = "sha256:4e2414d534a2ab57573365b3e6d0234dfb1d84b68b7f3b948e6fb743860a77c3"},
{file = "loguru-0.6.0.tar.gz", hash = "sha256:066bd06758d0a513e9836fd9c6b5a75bfb3fd36841f4b996bc60b547a309d41c"},
]

[package.dependencies]
@@ -3395,57 +3249,7 @@ colorama = {version = ">=0.3.4", markers = "sys_platform == \"win32\""}
win32-setctime = {version = ">=1.0.0", markers = "sys_platform == \"win32\""}

[package.extras]
dev = ["Sphinx (==5.3.0)", "colorama (==0.4.5)", "colorama (==0.4.6)", "freezegun (==1.1.0)", "freezegun (==1.2.2)", "mypy (==v0.910)", "mypy (==v0.971)", "mypy (==v0.990)", "pre-commit (==3.2.1)", "pytest (==6.1.2)", "pytest (==7.2.1)", "pytest-cov (==2.12.1)", "pytest-cov (==4.0.0)", "pytest-mypy-plugins (==1.10.1)", "pytest-mypy-plugins (==1.9.3)", "sphinx-autobuild (==2021.3.14)", "sphinx-rtd-theme (==1.2.0)", "tox (==3.27.1)", "tox (==4.4.6)"]

[[package]]
name = "lz4"
version = "4.3.2"
description = "LZ4 Bindings for Python"
category = "dev"
optional = false
python-versions = ">=3.7"
files = [
{file = "lz4-4.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:1c4c100d99eed7c08d4e8852dd11e7d1ec47a3340f49e3a96f8dfbba17ffb300"},
{file = "lz4-4.3.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:edd8987d8415b5dad25e797043936d91535017237f72fa456601be1479386c92"},
{file = "lz4-4.3.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f7c50542b4ddceb74ab4f8b3435327a0861f06257ca501d59067a6a482535a77"},
{file = "lz4-4.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f5614d8229b33d4a97cb527db2a1ac81308c6e796e7bdb5d1309127289f69d5"},
{file = "lz4-4.3.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8f00a9ba98f6364cadda366ae6469b7b3568c0cced27e16a47ddf6b774169270"},
{file = "lz4-4.3.2-cp310-cp310-win32.whl", hash = "sha256:b10b77dc2e6b1daa2f11e241141ab8285c42b4ed13a8642495620416279cc5b2"},
{file = "lz4-4.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:86480f14a188c37cb1416cdabacfb4e42f7a5eab20a737dac9c4b1c227f3b822"},
{file = "lz4-4.3.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:7c2df117def1589fba1327dceee51c5c2176a2b5a7040b45e84185ce0c08b6a3"},
{file = "lz4-4.3.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:1f25eb322eeb24068bb7647cae2b0732b71e5c639e4e4026db57618dcd8279f0"},
{file = "lz4-4.3.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8df16c9a2377bdc01e01e6de5a6e4bbc66ddf007a6b045688e285d7d9d61d1c9"},
{file = "lz4-4.3.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f571eab7fec554d3b1db0d666bdc2ad85c81f4b8cb08906c4c59a8cad75e6e22"},
{file = "lz4-4.3.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7211dc8f636ca625abc3d4fb9ab74e5444b92df4f8d58ec83c8868a2b0ff643d"},
{file = "lz4-4.3.2-cp311-cp311-win32.whl", hash = "sha256:867664d9ca9bdfce840ac96d46cd8838c9ae891e859eb98ce82fcdf0e103a947"},
{file = "lz4-4.3.2-cp311-cp311-win_amd64.whl", hash = "sha256:a6a46889325fd60b8a6b62ffc61588ec500a1883db32cddee9903edfba0b7584"},
{file = "lz4-4.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:3a85b430138882f82f354135b98c320dafb96fc8fe4656573d95ab05de9eb092"},
{file = "lz4-4.3.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:65d5c93f8badacfa0456b660285e394e65023ef8071142e0dcbd4762166e1be0"},
{file = "lz4-4.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6b50f096a6a25f3b2edca05aa626ce39979d63c3b160687c8c6d50ac3943d0ba"},
{file = "lz4-4.3.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:200d05777d61ba1ff8d29cb51c534a162ea0b4fe6d3c28be3571a0a48ff36080"},
{file = "lz4-4.3.2-cp37-cp37m-win32.whl", hash = "sha256:edc2fb3463d5d9338ccf13eb512aab61937be50aa70734bcf873f2f493801d3b"},
{file = "lz4-4.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:83acfacab3a1a7ab9694333bcb7950fbeb0be21660d236fd09c8337a50817897"},
{file = "lz4-4.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:7a9eec24ec7d8c99aab54de91b4a5a149559ed5b3097cf30249b665689b3d402"},
{file = "lz4-4.3.2-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:31d72731c4ac6ebdce57cd9a5cabe0aecba229c4f31ba3e2c64ae52eee3fdb1c"},
{file = "lz4-4.3.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:83903fe6db92db0be101acedc677aa41a490b561567fe1b3fe68695b2110326c"},
{file = "lz4-4.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:926b26db87ec8822cf1870efc3d04d06062730ec3279bbbd33ba47a6c0a5c673"},
{file = "lz4-4.3.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e05afefc4529e97c08e65ef92432e5f5225c0bb21ad89dee1e06a882f91d7f5e"},
{file = "lz4-4.3.2-cp38-cp38-win32.whl", hash = "sha256:ad38dc6a7eea6f6b8b642aaa0683253288b0460b70cab3216838747163fb774d"},
{file = "lz4-4.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:7e2dc1bd88b60fa09b9b37f08553f45dc2b770c52a5996ea52b2b40f25445676"},
{file = "lz4-4.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:edda4fb109439b7f3f58ed6bede59694bc631c4b69c041112b1b7dc727fffb23"},
{file = "lz4-4.3.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0ca83a623c449295bafad745dcd399cea4c55b16b13ed8cfea30963b004016c9"},
{file = "lz4-4.3.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d5ea0e788dc7e2311989b78cae7accf75a580827b4d96bbaf06c7e5a03989bd5"},
{file = "lz4-4.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a98b61e504fb69f99117b188e60b71e3c94469295571492a6468c1acd63c37ba"},
{file = "lz4-4.3.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:4931ab28a0d1c133104613e74eec1b8bb1f52403faabe4f47f93008785c0b929"},
{file = "lz4-4.3.2-cp39-cp39-win32.whl", hash = "sha256:ec6755cacf83f0c5588d28abb40a1ac1643f2ff2115481089264c7630236618a"},
{file = "lz4-4.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:4caedeb19e3ede6c7a178968b800f910db6503cb4cb1e9cc9221157572139b49"},
{file = "lz4-4.3.2.tar.gz", hash = "sha256:e1431d84a9cfb23e6773e72078ce8e65cad6745816d4cbf9ae67da5ea419acda"},
]

[package.extras]
docs = ["sphinx (>=1.6.0)", "sphinx-bootstrap-theme"]
flake8 = ["flake8"]
tests = ["psutil", "pytest (!=3.3.0)", "pytest-cov"]
dev = ["Sphinx (>=4.1.1)", "black (>=19.10b0)", "colorama (>=0.3.4)", "docutils (==0.16)", "flake8 (>=3.7.7)", "isort (>=5.1.1)", "pytest (>=4.6.2)", "pytest-cov (>=2.7.1)", "sphinx-autobuild (>=0.7.1)", "sphinx-rtd-theme (>=0.4.3)", "tox (>=3.9.0)"]

[[package]]
name = "manifest-ml"
@@ -3666,63 +3470,6 @@ files = [
{file = "mistune-2.0.5.tar.gz", hash = "sha256:0246113cb2492db875c6be56974a7c893333bf26cd92891c85f63151cee09d34"},
]

[[package]]
name = "mmh3"
version = "3.1.0"
description = "Python wrapper for MurmurHash (MurmurHash3), a set of fast and robust hash functions."
category = "main"
optional = false
python-versions = "*"
files = [
{file = "mmh3-3.1.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:16ee043b1bac040b4324b8baee39df9fdca480a560a6d74f2eef66a5009a234e"},
{file = "mmh3-3.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:04ac865319e5b36148a4b6cdf27f8bda091c47c4ab7b355d7f353dfc2b8a3cce"},
{file = "mmh3-3.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9e751f5433417a21c2060b0efa1afc67cfbe29977c867336148c8edb086fae70"},
{file = "mmh3-3.1.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:bdb863b89c1b34e3681d4a3b15d424734940eb8036f3457cb35ef34fb87a503c"},
{file = "mmh3-3.1.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1230930fbf2faec4ddf5b76d0768ae73c102de173c301962bdd468177275adf9"},
{file = "mmh3-3.1.0-cp310-cp310-win32.whl", hash = "sha256:b8ed7a2361718795a1b519a08d05f44947a20b27e202b53946561a00dde669c1"},
{file = "mmh3-3.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:29e878e7467a000f34ab68c218ad7ad81312c0a94bc10df3c50a48bcad39dd83"},
{file = "mmh3-3.1.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c271472325b70d64a4fbb1f2e964ca5b093ac10258e1390f8408890b065868fe"},
{file = "mmh3-3.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:0109320f7e0e262123ff4f1acd06acfbc8b3bf19cc13d98c0bc369264430aaeb"},
{file = "mmh3-3.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:524e29dfe66499695f9496edcfc96782d130aabd6ba12c50c72372163cc6f3ea"},
{file = "mmh3-3.1.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:66bdb06a03074e65e614da1aa199b1d16c90608bec9d8fc3faa81d887ffe93cc"},
{file = "mmh3-3.1.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2a4d471eb75df8320061ab3b8cbe11c970be9f116b01bc2222ebda9c0a777520"},
{file = "mmh3-3.1.0-cp311-cp311-win32.whl", hash = "sha256:a886d9ce995a4bdfd7a600ddf61b9015cccbc73c50b898f8ff3c78af24384710"},
{file = "mmh3-3.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:5edb5ac882c04aff8a2a18ae8b74a0c339ac9b83db9820d8456f518bb558e0d8"},
{file = "mmh3-3.1.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:190fd10981fbd6c67e10ce3b56bcc021562c0df0fee2e2864347d64e65b1783a"},
{file = "mmh3-3.1.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cd781b115cf649811cfde76368c33d2e553b6f88bb41131c314f30d8e65e9d24"},
{file = "mmh3-3.1.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f48bb0a867077acc1f548591ad49506389f36d18f36dccd10becf071e5cbdda4"},
{file = "mmh3-3.1.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7d0936a82438e340636a11b9a938378870fc1c7a139632dac09a9a9277351704"},
{file = "mmh3-3.1.0-cp37-cp37m-win32.whl", hash = "sha256:d196cc035c2238493248522ae4e54c3cb790549b1564f6dea4d88dfe4b326313"},
{file = "mmh3-3.1.0-cp37-cp37m-win_amd64.whl", hash = "sha256:731d37f089b6c212fab1beea24e673161146eb6c76baf9ac074a3424d1172d41"},
{file = "mmh3-3.1.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:9977fb81f8c66f4eee8439734a18dba7826fe78723d15ab53f42db977005be0f"},
{file = "mmh3-3.1.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:bf4f3f20a8b8405c08b13bc9e4ac33bf55129b50b535cd07ce1891b7f96326ac"},
{file = "mmh3-3.1.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87cdbc6e70099ad92f17a28b4054ffb1938657e8fb7c1e4e03b194a1b4683fd6"},
{file = "mmh3-3.1.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6dd81321d14f62aa3711f30533c85a74dc7596e0fee63c8eddd375bc92ab846c"},
{file = "mmh3-3.1.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2e6eba88e5c1a2778f3de00a9502e3c214ebb757337ece2a7d71e060d188ddfa"},
{file = "mmh3-3.1.0-cp38-cp38-win32.whl", hash = "sha256:d91e696925f208d28f3bb7bdf29815524ce955248276af256519bd3538c411ce"},
{file = "mmh3-3.1.0-cp38-cp38-win_amd64.whl", hash = "sha256:cbc2917df568aeb86ec5aa863bfb20fa14e01039cbdce7650efbabc30960df49"},
{file = "mmh3-3.1.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3b22832d565128be83d69f5d49243bb567840a954df377c9f5b26646a6eec39b"},
{file = "mmh3-3.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:ced92a0e285a9111413541c197b0c17d280cee96f7c564b258caf5de5ab8ee01"},
{file = "mmh3-3.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f906833753b4ddcb690c2c1b74e77725868bc3a8b762b7a77737d08be89ae41d"},
{file = "mmh3-3.1.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:72b5685832a7a87a55ebff481794bc410484d7bd4c5e80dae4d8ac50739138ef"},
{file = "mmh3-3.1.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d2aa4d422c7c088bbc5d367b45431268ebe6742a0a64eade93fab708e25757c"},
{file = "mmh3-3.1.0-cp39-cp39-win32.whl", hash = "sha256:4459bec818f534dc8378568ad89ab310ff47cda3e00ab322edce48dd899bba32"},
{file = "mmh3-3.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:03e04b3480e71828f48d17653451a3286555f0534942cb6ba93065b10ad5f9dc"},
{file = "mmh3-3.1.0.tar.gz", hash = "sha256:9b0f2b2ab4a915333c9d1089572e290a021ebb5b900bb7f7114dccc03995d732"},
]

[[package]]
name = "monotonic"
version = "1.6"
description = "An implementation of time.monotonic() for Python 2 & < 3.3"
category = "dev"
optional = false
python-versions = "*"
files = [
{file = "monotonic-1.6-py2.py3-none-any.whl", hash = "sha256:68687e19a14f11f26d140dd5c86f3dba4bf5df58003000ed467e0e2a69bca96c"},
{file = "monotonic-1.6.tar.gz", hash = "sha256:3a55207bcfed53ddd5c5bae174524062935efed17792e9de2ad0205ce9ad63f7"},
]

[[package]]
name = "more-itertools"
version = "9.1.0"
@@ -4063,14 +3810,14 @@ test = ["black", "check-manifest", "flake8", "ipykernel", "ipython (<8.0.0)", "i

[[package]]
name = "nbconvert"
version = "7.3.1"
version = "7.3.0"
description = "Converting Jupyter Notebooks"
category = "dev"
optional = false
python-versions = ">=3.7"
files = [
{file = "nbconvert-7.3.1-py3-none-any.whl", hash = "sha256:d2e95904666f1ff77d36105b9de4e0801726f93b862d5b28f69e93d99ad3b19c"},
{file = "nbconvert-7.3.1.tar.gz", hash = "sha256:78685362b11d2e8058e70196fe83b09abed8df22d3e599cf271f4d39fdc48b9e"},
{file = "nbconvert-7.3.0-py3-none-any.whl", hash = "sha256:8983a83d0b083d56b076019f0a319f63bc16af70c9372892b86a0aab0a264b1d"},
{file = "nbconvert-7.3.0.tar.gz", hash = "sha256:b970a13aba97529c223d805dd0706c2fe04dfc05e250ad4e6f7ae33daf6fede1"},
]

[package.dependencies]
@@ -4193,7 +3940,7 @@ name = "nltk"
version = "3.8.1"
description = "Natural Language Toolkit"
category = "main"
optional = false
optional = true
python-versions = ">=3.7"
files = [
{file = "nltk-3.8.1-py3-none-any.whl", hash = "sha256:fd5c9109f976fa86bcadba8f91e47f5e9293bd034474752e92a520f81c93dda5"},
@@ -4371,7 +4118,7 @@ name = "nvidia-cublas-cu11"
version = "11.10.3.66"
description = "CUBLAS native runtime libraries"
category = "main"
optional = false
optional = true
python-versions = ">=3"
files = [
{file = "nvidia_cublas_cu11-11.10.3.66-py3-none-manylinux1_x86_64.whl", hash = "sha256:d32e4d75f94ddfb93ea0a5dda08389bcc65d8916a25cb9f37ac89edaeed3bded"},
@@ -4387,7 +4134,7 @@ name = "nvidia-cuda-nvrtc-cu11"
version = "11.7.99"
description = "NVRTC native runtime libraries"
category = "main"
optional = false
optional = true
python-versions = ">=3"
files = [
{file = "nvidia_cuda_nvrtc_cu11-11.7.99-2-py3-none-manylinux1_x86_64.whl", hash = "sha256:9f1562822ea264b7e34ed5930567e89242d266448e936b85bc97a3370feabb03"},
@@ -4404,7 +4151,7 @@ name = "nvidia-cuda-runtime-cu11"
version = "11.7.99"
description = "CUDA Runtime native Libraries"
category = "main"
optional = false
optional = true
python-versions = ">=3"
files = [
{file = "nvidia_cuda_runtime_cu11-11.7.99-py3-none-manylinux1_x86_64.whl", hash = "sha256:cc768314ae58d2641f07eac350f40f99dcb35719c4faff4bc458a7cd2b119e31"},
@@ -4420,7 +4167,7 @@ name = "nvidia-cudnn-cu11"
version = "8.5.0.96"
description = "cuDNN runtime libraries"
category = "main"
optional = false
optional = true
python-versions = ">=3"
files = [
{file = "nvidia_cudnn_cu11-8.5.0.96-2-py3-none-manylinux1_x86_64.whl", hash = "sha256:402f40adfc6f418f9dae9ab402e773cfed9beae52333f6d86ae3107a1b9527e7"},
@@ -5100,26 +4847,6 @@ urllib3 = ">=1.21.1"
[package.extras]
grpc = ["googleapis-common-protos (>=1.53.0)", "grpc-gateway-protoc-gen-openapiv2 (==0.1.0)", "grpcio (>=1.44.0)", "lz4 (>=3.1.3)", "protobuf (==3.19.3)"]

[[package]]
name = "pinecone-text"
version = "0.4.2"
description = "Text utilities library by Pinecone.io"
category = "main"
optional = false
python-versions = ">=3.8,<4.0"
files = [
{file = "pinecone_text-0.4.2-py3-none-any.whl", hash = "sha256:79468c197b2fc7738c1511a6b5b8e7697fad613604ad935661a438f621ad2004"},
{file = "pinecone_text-0.4.2.tar.gz", hash = "sha256:131d9d1cc5654bdff8c4e497bb00e54fcab07a3b501e38aa16a6f19c2f00d4c6"},
]

[package.dependencies]
mmh3 = ">=3.1.0,<4.0.0"
nltk = ">=3.6.5,<4.0.0"
sentence-transformers = ">=2.0.0,<3.0.0"
torch = ">=1.13.1,<2.0.0"
transformers = ">=4.26.1,<5.0.0"
wget = ">=3.2,<4.0"

[[package]]
name = "pkgutil-resolve-name"
version = "1.3.10"
@@ -5186,30 +4913,6 @@ files = [
dev = ["pre-commit", "tox"]
testing = ["pytest", "pytest-benchmark"]

[[package]]
name = "posthog"
version = "2.4.2"
description = "Integrate PostHog into any python application."
category = "dev"
optional = false
python-versions = "*"
files = [
{file = "posthog-2.4.2-py2.py3-none-any.whl", hash = "sha256:8c7c37de997d955aea61bf0aa0069970e71f0d9d79c9e6b3a134e6593d5aa3d6"},
{file = "posthog-2.4.2.tar.gz", hash = "sha256:652a628623aab26597e8421a7ddf9caaf19dd93cc1288901a6b23db9693d34e5"},
]

[package.dependencies]
backoff = ">=1.10.0"
monotonic = ">=1.5"
python-dateutil = ">2.1"
requests = ">=2.7,<3.0"
six = ">=1.5"

[package.extras]
dev = ["black", "flake8", "flake8-print", "isort", "pre-commit"]
sentry = ["django", "sentry-sdk"]
test = ["coverage", "flake8", "freezegun (==0.3.15)", "mock (>=2.0.0)", "pylint", "pytest"]

[[package]]
name = "pox"
version = "0.3.2"
@@ -5709,14 +5412,14 @@ typing-extensions = "*"

[[package]]
name = "pygments"
version = "2.15.0"
version = "2.14.0"
description = "Pygments is a syntax highlighting package written in Python."
category = "main"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.6"
files = [
{file = "Pygments-2.15.0-py3-none-any.whl", hash = "sha256:77a3299119af881904cd5ecd1ac6a66214b6e9bed1f2db16993b54adede64094"},
{file = "Pygments-2.15.0.tar.gz", hash = "sha256:f7e36cffc4c517fbc252861b9a6e4644ca0e5abadf9a113c72d1358ad09b9500"},
{file = "Pygments-2.14.0-py3-none-any.whl", hash = "sha256:fa7bd7bd2771287c0de303af8bfdfc731f51bd2c6a47ab69d117138893b82717"},
{file = "Pygments-2.14.0.tar.gz", hash = "sha256:b3ed06a9e8ac9a9aae5a6f5dbe78a8a58655d17b43b93c078f094ddc476ae297"},
]

[package.extras]
@@ -5777,14 +5480,14 @@ diagrams = ["jinja2", "railroad-diagrams"]

[[package]]
name = "pypdf"
version = "3.7.1"
version = "3.7.0"
description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files"
category = "main"
optional = true
python-versions = ">=3.6"
files = [
{file = "pypdf-3.7.1-py3-none-any.whl", hash = "sha256:fa780c9464ec3b49fd16dabd110a40a291439bc6edd0f21f302add63c1f5ade5"},
{file = "pypdf-3.7.1.tar.gz", hash = "sha256:dfb61fcccd4bc6d321aae612c01924b3c953aa5857e6e39d31e24dbb9b49da13"},
{file = "pypdf-3.7.0-py3-none-any.whl", hash = "sha256:b50c2d3c807af2f75c945b7bdd8f8bb01d513a0c25d6b66bf299b9fad1cbc91c"},
{file = "pypdf-3.7.0.tar.gz", hash = "sha256:da98eb41428b26f5ab23561cc125eedff450147598d6b6159e62943edc0008fe"},
]

[package.dependencies]
@@ -5849,17 +5552,18 @@ files = [

[[package]]
name = "pytest"
version = "7.3.0"
version = "7.2.2"
description = "pytest: simple powerful testing with Python"
category = "dev"
optional = false
python-versions = ">=3.7"
files = [
{file = "pytest-7.3.0-py3-none-any.whl", hash = "sha256:933051fa1bfbd38a21e73c3960cebdad4cf59483ddba7696c48509727e17f201"},
{file = "pytest-7.3.0.tar.gz", hash = "sha256:58ecc27ebf0ea643ebfdf7fb1249335da761a00c9f955bcd922349bcb68ee57d"},
{file = "pytest-7.2.2-py3-none-any.whl", hash = "sha256:130328f552dcfac0b1cec75c12e3f005619dc5f874f0a06e8ff7263f0ee6225e"},
{file = "pytest-7.2.2.tar.gz", hash = "sha256:c99ab0c73aceb050f68929bc93af19ab6db0558791c6a0715723abe9d0ade9d4"},
]

[package.dependencies]
attrs = ">=19.2.0"
colorama = {version = "*", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
iniconfig = "*"
@@ -5868,7 +5572,7 @@ pluggy = ">=0.12,<2.0"
tomli = {version = ">=1.0.0", markers = "python_version < \"3.11\""}

[package.extras]
testing = ["argcomplete", "attrs (>=19.2.0)", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]
testing = ["argcomplete", "hypothesis (>=3.56)", "mock", "nose", "pygments (>=2.7.2)", "requests", "xmlschema"]

[[package]]
name = "pytest-asyncio"
@@ -6228,14 +5932,14 @@ cffi = {version = "*", markers = "implementation_name == \"pypy\""}

[[package]]
name = "qdrant-client"
version = "1.1.3"
version = "1.1.2"
description = "Client library for the Qdrant vector search engine"
category = "main"
optional = true
python-versions = ">=3.7,<3.12"
files = [
{file = "qdrant_client-1.1.3-py3-none-any.whl", hash = "sha256:c95f59fb9e3e89d163517f8992ee4557eccb45c252147e11e45c608ef1c7dd29"},
{file = "qdrant_client-1.1.3.tar.gz", hash = "sha256:2b7de2b987fc456c643a06878a4150947c3d3d6c6515f6c29f6e707788daa6e7"},
{file = "qdrant_client-1.1.2-py3-none-any.whl", hash = "sha256:e722aa76af3d4db1f52bc49857f1e2398cbb89afafac1a7b7f21eda424c72faf"},
{file = "qdrant_client-1.1.2.tar.gz", hash = "sha256:708b7a6291dfeeeaa8c5ac2e61a0f73b61fa66b45e31a568e3f406ab08393100"},
]

[package.dependencies]
@@ -6459,6 +6163,24 @@ files = [
[package.dependencies]
six = "*"

[[package]]
name = "rfc3986"
version = "1.5.0"
description = "Validating URI References per RFC 3986"
category = "main"
optional = true
python-versions = "*"
files = [
{file = "rfc3986-1.5.0-py2.py3-none-any.whl", hash = "sha256:a86d6e1f5b1dc238b218b012df0aa79409667bb209e58da56d0b94704e712a97"},
{file = "rfc3986-1.5.0.tar.gz", hash = "sha256:270aaf10d87d0d4e095063c65bf3ddbc6ee3d0b226328ce21e036f946e421835"},
]

[package.dependencies]
idna = {version = "*", optional = true, markers = "extra == \"idna2008\""}

[package.extras]
idna2008 = ["idna"]

[[package]]
name = "rfc3986-validator"
version = "0.1.1"
@@ -6556,7 +6278,7 @@ name = "scikit-learn"
|
||||
version = "1.2.2"
|
||||
description = "A set of python modules for machine learning and data mining"
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.8"
|
||||
files = [
|
||||
{file = "scikit-learn-1.2.2.tar.gz", hash = "sha256:8429aea30ec24e7a8c7ed8a3fa6213adf3814a6efbea09e16e0a0c71e1a1a3d7"},
|
||||
@@ -6599,7 +6321,7 @@ name = "scipy"
|
||||
version = "1.9.3"
|
||||
description = "Fundamental algorithms for scientific computing in Python"
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.8"
|
||||
files = [
|
||||
{file = "scipy-1.9.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:1884b66a54887e21addf9c16fb588720a8309a57b2e258ae1c7986d4444d3bc0"},
|
||||
@@ -6655,7 +6377,7 @@ name = "sentence-transformers"
|
||||
version = "2.2.2"
|
||||
description = "Multilingual text embeddings"
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.6.0"
|
||||
files = [
|
||||
{file = "sentence-transformers-2.2.2.tar.gz", hash = "sha256:dbc60163b27de21076c9a30d24b5b7b6fa05141d68cf2553fa9a77bf79a29136"},
|
||||
@@ -6678,7 +6400,7 @@ name = "sentencepiece"
|
||||
version = "0.1.97"
|
||||
description = "SentencePiece python wrapper"
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = "*"
|
||||
files = [
|
||||
{file = "sentencepiece-0.1.97-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:6f249c8f1852893be86eae66b19d522c5fb30bbad4fe2d1b07f06fdc86e1907e"},
|
||||
@@ -7234,7 +6956,7 @@ files = [
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
greenlet = {version = "!=0.4.17", markers = "python_version >= \"3\" and (platform_machine == \"aarch64\" or platform_machine == \"ppc64le\" or platform_machine == \"x86_64\" or platform_machine == \"amd64\" or platform_machine == \"AMD64\" or platform_machine == \"win32\" or platform_machine == \"WIN32\")"}
|
||||
greenlet = {version = "!=0.4.17", markers = "python_version >= \"3\" and platform_machine == \"aarch64\" or python_version >= \"3\" and platform_machine == \"ppc64le\" or python_version >= \"3\" and platform_machine == \"x86_64\" or python_version >= \"3\" and platform_machine == \"amd64\" or python_version >= \"3\" and platform_machine == \"AMD64\" or python_version >= \"3\" and platform_machine == \"win32\" or python_version >= \"3\" and platform_machine == \"WIN32\""}
|
||||
|
||||
[package.extras]
|
||||
aiomysql = ["aiomysql", "greenlet (!=0.4.17)"]
|
||||
@@ -7334,7 +7056,7 @@ name = "starlette"
|
||||
version = "0.26.1"
|
||||
description = "The little ASGI library that shines."
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "starlette-0.26.1-py3-none-any.whl", hash = "sha256:e87fce5d7cbdde34b76f0ac69013fd9d190d581d80681493016666e6f96c6d5e"},
|
||||
@@ -7716,7 +7438,7 @@ name = "threadpoolctl"
|
||||
version = "3.1.0"
|
||||
description = "threadpoolctl"
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.6"
|
||||
files = [
|
||||
{file = "threadpoolctl-3.1.0-py3-none-any.whl", hash = "sha256:8b99adda265feb6773280df41eece7b2e6561b772d21ffd52e372f999024907b"},
|
||||
@@ -7728,7 +7450,7 @@ name = "tiktoken"
|
||||
version = "0.3.3"
|
||||
description = "tiktoken is a fast BPE tokeniser for use with OpenAI's models"
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.8"
|
||||
files = [
|
||||
{file = "tiktoken-0.3.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d1f37fa75ba70c1bc7806641e8ccea1fba667d23e6341a1591ea333914c226a9"},
|
||||
@@ -7872,7 +7594,7 @@ name = "torch"
|
||||
version = "1.13.1"
|
||||
description = "Tensors and Dynamic neural networks in Python with strong GPU acceleration"
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.7.0"
|
||||
files = [
|
||||
{file = "torch-1.13.1-cp310-cp310-manylinux1_x86_64.whl", hash = "sha256:fd12043868a34a8da7d490bf6db66991108b00ffbeecb034228bfcbbd4197143"},
|
||||
@@ -7913,7 +7635,7 @@ name = "torchvision"
|
||||
version = "0.14.1"
|
||||
description = "image and video datasets and models for torch deep learning"
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "torchvision-0.14.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:eeb05dd9dd3af5428fee525400759daf8da8e4caec45ddd6908cfb36571f6433"},
|
||||
@@ -8265,7 +7987,7 @@ name = "uvicorn"
|
||||
version = "0.21.1"
|
||||
description = "The lightning-fast ASGI server."
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "uvicorn-0.21.1-py3-none-any.whl", hash = "sha256:e47cac98a6da10cd41e6fd036d472c6f58ede6c5dbee3dbee3ef7a100ed97742"},
|
||||
@@ -8291,7 +8013,7 @@ name = "uvloop"
|
||||
version = "0.17.0"
|
||||
description = "Fast implementation of asyncio event loop on top of libuv"
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "uvloop-0.17.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:ce9f61938d7155f79d3cb2ffa663147d4a76d16e08f65e2c66b77bd41b356718"},
|
||||
@@ -8333,13 +8055,13 @@ test = ["Cython (>=0.29.32,<0.30.0)", "aiohttp", "flake8 (>=3.9.2,<3.10.0)", "my
|
||||
|
||||
[[package]]
|
||||
name = "validators"
|
||||
version = "0.20.0"
|
||||
version = "0.19.0"
|
||||
description = "Python Data Validation for Humans™."
|
||||
category = "main"
|
||||
optional = true
|
||||
python-versions = ">=3.4"
|
||||
files = [
|
||||
{file = "validators-0.20.0.tar.gz", hash = "sha256:24148ce4e64100a2d5e267233e23e7afeb55316b47d30faae7eb6e7292bc226a"},
|
||||
{file = "validators-0.19.0.tar.gz", hash = "sha256:dec45f4381f042f1e705cfa74949505b77f1e27e8b05409096fee8152c839cbe"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
@@ -8426,7 +8148,7 @@ name = "watchfiles"
|
||||
version = "0.19.0"
|
||||
description = "Simple, modern and high performance file watching and code reload in python."
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "watchfiles-0.19.0-cp37-abi3-macosx_10_7_x86_64.whl", hash = "sha256:91633e64712df3051ca454ca7d1b976baf842d7a3640b87622b323c55f3345e7"},
|
||||
@@ -8470,21 +8192,21 @@ files = [
|
||||
|
||||
[[package]]
|
||||
name = "weaviate-client"
|
||||
version = "3.15.5"
|
||||
version = "3.15.4"
|
||||
description = "A python native weaviate client"
|
||||
category = "main"
|
||||
optional = true
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "weaviate-client-3.15.5.tar.gz", hash = "sha256:6da7e5d08dc9bb8b7879661d1a457c50af7d73e621a5305efe131160e83da69e"},
|
||||
{file = "weaviate_client-3.15.5-py3-none-any.whl", hash = "sha256:24d0be614e5494534e758cc67a45e7e15f3929a89bf512afd642de53d08723c7"},
|
||||
{file = "weaviate-client-3.15.4.tar.gz", hash = "sha256:5e61ebffefbedf62b0751d7de562ffd5384717c8ee6adfca4ea6eb150d012e1c"},
|
||||
{file = "weaviate_client-3.15.4-py3-none-any.whl", hash = "sha256:e765b2f434d2a4301ad8d63052833ab7708d0ef430033496e3e7020ef72c9da0"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
authlib = ">=1.1.0"
|
||||
requests = ">=2.28.0,<2.29.0"
|
||||
tqdm = ">=4.59.0,<5.0.0"
|
||||
validators = ">=0.18.2,<=0.21.0"
|
||||
validators = ">=0.18.2,<0.20.0"
|
||||
|
||||
[[package]]
|
||||
name = "webcolors"
|
||||
@@ -8536,7 +8258,7 @@ name = "websockets"
|
||||
version = "11.0.1"
|
||||
description = "An implementation of the WebSocket Protocol (RFC 6455 & 7692)"
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "websockets-11.0.1-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:3d30cc1a90bcbf9e22e1f667c1c5a7428e2d37362288b4ebfd5118eb0b11afa9"},
|
||||
@@ -8629,23 +8351,12 @@ MarkupSafe = ">=2.1.1"
|
||||
[package.extras]
|
||||
watchdog = ["watchdog"]
|
||||
|
||||
[[package]]
|
||||
name = "wget"
|
||||
version = "3.2"
|
||||
description = "pure python download utility"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = "*"
|
||||
files = [
|
||||
{file = "wget-3.2.zip", hash = "sha256:35e630eca2aa50ce998b9b1a127bb26b30dfee573702782aa982f875e3f16061"},
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "wheel"
|
||||
version = "0.40.0"
|
||||
description = "A built-package format for Python"
|
||||
category = "main"
|
||||
optional = false
|
||||
optional = true
|
||||
python-versions = ">=3.7"
|
||||
files = [
|
||||
{file = "wheel-0.40.0-py3-none-any.whl", hash = "sha256:d236b20e7cb522daf2390fa84c55eea81c5c30190f90f29ae2ca1ad8355bf247"},
|
||||
@@ -8934,81 +8645,14 @@ files = [
|
||||
docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-lint"]
|
||||
testing = ["big-O", "flake8 (<5)", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)"]
|
||||
|
||||
[[package]]
|
||||
name = "zstandard"
|
||||
version = "0.20.0"
|
||||
description = "Zstandard bindings for Python"
|
||||
category = "dev"
|
||||
optional = false
|
||||
python-versions = ">=3.6"
|
||||
files = [
|
||||
{file = "zstandard-0.20.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c4efa051799703dc37c072e22af1f0e4c77069a78fb37caf70e26414c738ca1d"},
|
||||
{file = "zstandard-0.20.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f847701d77371d90783c0ce6cfdb7ebde4053882c2aaba7255c70ae3c3eb7af0"},
|
||||
{file = "zstandard-0.20.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0aa4d178560d7ee32092ddfd415c2cdc6ab5ddce9554985c75f1a019a0ff4c55"},
|
||||
{file = "zstandard-0.20.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0488f2a238b4560828b3a595f3337daac4d3725c2a1637ffe2a0d187c091da59"},
|
||||
{file = "zstandard-0.20.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:cd0aa9a043c38901925ae1bba49e1e638f2d9c3cdf1b8000868993c642deb7f2"},
|
||||
{file = "zstandard-0.20.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cdd769da7add8498658d881ce0eeb4c35ea1baac62e24c5a030c50f859f29724"},
|
||||
{file = "zstandard-0.20.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:9aea3c7bab4276212e5ac63d28e6bd72a79ff058d57e06926dfe30a52451d943"},
|
||||
{file = "zstandard-0.20.0-cp310-cp310-win32.whl", hash = "sha256:0d213353d58ad37fb5070314b156fb983b4d680ed5f3fce76ab013484cf3cf12"},
|
||||
{file = "zstandard-0.20.0-cp310-cp310-win_amd64.whl", hash = "sha256:d08459f7f7748398a6cc65eb7f88aa7ef5731097be2ddfba544be4b558acd900"},
|
||||
{file = "zstandard-0.20.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c1929afea64da48ec59eca9055d7ec7e5955801489ac40ac2a19dde19e7edad9"},
|
||||
{file = "zstandard-0.20.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b6d718f1b7cd30adb02c2a46dde0f25a84a9de8865126e0fff7d0162332d6b92"},
|
||||
{file = "zstandard-0.20.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5499d65d4a1978dccf0a9c2c0d12415e16d4995ffad7a0bc4f72cc66691cf9f2"},
|
||||
{file = "zstandard-0.20.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:302a31400de0280f17c4ce67a73444a7a069f228db64048e4ce555cd0c02fbc4"},
|
||||
{file = "zstandard-0.20.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:39ae788dcdc404c07ef7aac9b11925185ea0831b985db0bbc43f95acdbd1c2ce"},
|
||||
{file = "zstandard-0.20.0-cp311-cp311-win32.whl", hash = "sha256:e3f6887d2bdfb5752d5544860bd6b778e53ebfaf4ab6c3f9d7fd388445429d41"},
|
||||
{file = "zstandard-0.20.0-cp311-cp311-win_amd64.whl", hash = "sha256:4abf9a9e0841b844736d1ae8ead2b583d2cd212815eab15391b702bde17477a7"},
|
||||
{file = "zstandard-0.20.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:dc47cec184e66953f635254e5381df8a22012a2308168c069230b1a95079ccd0"},
|
||||
{file = "zstandard-0.20.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:84c1dae0c0a21eea245b5691286fe6470dc797d5e86e0c26b57a3afd1e750b48"},
|
||||
{file = "zstandard-0.20.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:059316f07e39b7214cd9eed565d26ab239035d2c76835deeff381995f7a27ba8"},
|
||||
{file = "zstandard-0.20.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:9aca916724d0802d3e70dc68adeff893efece01dffe7252ee3ae0053f1f1990f"},
|
||||
{file = "zstandard-0.20.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b07f391fd85e3d07514c05fb40c5573b398d0063ab2bada6eb09949ec6004772"},
|
||||
{file = "zstandard-0.20.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:2adf65cfce73ce94ef4c482f6cc01f08ddf5e1ca0c1ec95f2b63840f9e4c226c"},
|
||||
{file = "zstandard-0.20.0-cp36-cp36m-win32.whl", hash = "sha256:ee2a1510e06dfc7706ea9afad363efe222818a1eafa59abc32d9bbcd8465fba7"},
|
||||
{file = "zstandard-0.20.0-cp36-cp36m-win_amd64.whl", hash = "sha256:29699746fae2760d3963a4ffb603968e77da55150ee0a3326c0569f4e35f319f"},
|
||||
{file = "zstandard-0.20.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:78fb35d07423f25efd0fc90d0d4710ae83cfc86443a32192b0c6cb8475ec79a5"},
|
||||
{file = "zstandard-0.20.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40466adfa071f58bfa448d90f9623d6aff67c6d86de6fc60be47a26388f6c74d"},
|
||||
{file = "zstandard-0.20.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ba86f931bf925e9561ccd6cb978acb163e38c425990927feb38be10c894fa937"},
|
||||
{file = "zstandard-0.20.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:b671b75ae88139b1dd022fa4aa66ba419abd66f98869af55a342cb9257a1831e"},
|
||||
{file = "zstandard-0.20.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cc98c8bcaa07150d3f5d7c4bd264eaa4fdd4a4dfb8fd3f9d62565ae5c4aba227"},
|
||||
{file = "zstandard-0.20.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:0b815dec62e2d5a1bf7a373388f2616f21a27047b9b999de328bca7462033708"},
|
||||
{file = "zstandard-0.20.0-cp37-cp37m-win32.whl", hash = "sha256:5a3578b182c21b8af3c49619eb4cd0b9127fa60791e621b34217d65209722002"},
|
||||
{file = "zstandard-0.20.0-cp37-cp37m-win_amd64.whl", hash = "sha256:f1ba6bbd28ad926d130f0af8016f3a2930baa013c2128cfff46ca76432f50669"},
|
||||
{file = "zstandard-0.20.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:b0f556c74c6f0f481b61d917e48c341cdfbb80cc3391511345aed4ce6fb52fdc"},
|
||||
{file = "zstandard-0.20.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:862ad0a5c94670f2bd6f64fff671bd2045af5f4ed428a3f2f69fa5e52483f86a"},
|
||||
{file = "zstandard-0.20.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a56036c08645aa6041d435a50103428f0682effdc67f5038de47cea5e4221d6f"},
|
||||
{file = "zstandard-0.20.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4af5d1891eebef430038ea4981957d31b1eb70aca14b906660c3ac1c3e7a8612"},
|
||||
{file = "zstandard-0.20.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:489959e2d52f7f1fe8ea275fecde6911d454df465265bf3ec51b3e755e769a5e"},
|
||||
{file = "zstandard-0.20.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7041efe3a93d0975d2ad16451720932e8a3d164be8521bfd0873b27ac917b77a"},
|
||||
{file = "zstandard-0.20.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:c28c7441638c472bfb794f424bd560a22c7afce764cd99196e8d70fbc4d14e85"},
|
||||
{file = "zstandard-0.20.0-cp38-cp38-win32.whl", hash = "sha256:ba4bb4c5a0cac802ff485fa1e57f7763df5efa0ad4ee10c2693ecc5a018d2c1a"},
|
||||
{file = "zstandard-0.20.0-cp38-cp38-win_amd64.whl", hash = "sha256:a5efe366bf0545a1a5a917787659b445ba16442ae4093f102204f42a9da1ecbc"},
|
||||
{file = "zstandard-0.20.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:79c3058ccbe1fa37356a73c9d3c0475ec935ab528f5b76d56fc002a5a23407c7"},
|
||||
{file = "zstandard-0.20.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:39cbaf8fe3fa3515d35fb790465db4dc1ff45e58e1e00cbaf8b714e85437f039"},
|
||||
{file = "zstandard-0.20.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f199d58f3fd7dfa0d447bc255ff22571f2e4e5e5748bfd1c41370454723cb053"},
|
||||
{file = "zstandard-0.20.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0f32a8f3a697ef87e67c0d0c0673b245babee6682b2c95e46eb30208ffb720bd"},
|
||||
{file = "zstandard-0.20.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_12_i686.manylinux2010_i686.whl", hash = "sha256:4a3c36284c219a4d2694e52b2582fe5d5f0ecaf94a22cf0ea959b527dbd8a2a6"},
|
||||
{file = "zstandard-0.20.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2eeb9e1ecd48ac1d352608bfe0dc1ed78a397698035a1796cf72f0c9d905d219"},
|
||||
{file = "zstandard-0.20.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:6179808ebd1ebc42b1e2f221a23c28a22d3bc8f79209ae4a3cc114693c380bff"},
|
||||
{file = "zstandard-0.20.0-cp39-cp39-win32.whl", hash = "sha256:afbcd2ed0c1145e24dd3df8440a429688a1614b83424bc871371b176bed429f9"},
|
||||
{file = "zstandard-0.20.0-cp39-cp39-win_amd64.whl", hash = "sha256:e6b4de1ba2f3028fafa0d82222d1e91b729334c8d65fbf04290c65c09d7457e1"},
|
||||
{file = "zstandard-0.20.0.tar.gz", hash = "sha256:613daadd72c71b1488742cafb2c3b381c39d0c9bb8c6cc157aa2d5ea45cc2efc"},
|
||||
]
|
||||
|
||||
[package.dependencies]
|
||||
cffi = {version = ">=1.11", markers = "platform_python_implementation == \"PyPy\""}
|
||||
|
||||
[package.extras]
|
||||
cffi = ["cffi (>=1.11)"]
|
||||
|
||||
[extras]
|
||||
all = ["anthropic", "cohere", "openai", "nlpcloud", "huggingface_hub", "jina", "manifest-ml", "elasticsearch", "opensearch-py", "google-search-results", "faiss-cpu", "sentence-transformers", "transformers", "spacy", "nltk", "wikipedia", "beautifulsoup4", "tiktoken", "torch", "jinja2", "pinecone-client", "pinecone-text", "weaviate-client", "redis", "google-api-python-client", "wolframalpha", "qdrant-client", "tensorflow-text", "pypdf", "networkx", "nomic", "aleph-alpha-client", "deeplake", "pgvector", "psycopg2-binary", "pyowm"]
|
||||
all = ["aleph-alpha-client", "anthropic", "beautifulsoup4", "cohere", "deeplake", "elasticsearch", "faiss-cpu", "google-api-python-client", "google-search-results", "huggingface_hub", "jina", "jinja2", "manifest-ml", "networkx", "nlpcloud", "nltk", "nomic", "openai", "opensearch-py", "pgvector", "pinecone-client", "psycopg2-binary", "pyowm", "pypdf", "qdrant-client", "redis", "sentence-transformers", "spacy", "tensorflow-text", "tiktoken", "torch", "transformers", "weaviate-client", "wikipedia", "wolframalpha"]
|
||||
cohere = ["cohere"]
|
||||
llms = ["anthropic", "cohere", "openai", "nlpcloud", "huggingface_hub", "manifest-ml", "torch", "transformers"]
|
||||
llms = ["anthropic", "cohere", "huggingface_hub", "manifest-ml", "nlpcloud", "openai", "torch", "transformers"]
|
||||
openai = ["openai"]
|
||||
qdrant = ["qdrant-client"]
|
||||
|
||||
[metadata]
|
||||
lock-version = "2.0"
|
||||
python-versions = ">=3.8.1,<4.0"
|
||||
content-hash = "26b1bbfbc3a228b892b2466af3561b799238a6d379853d325dc3c798776df0d8"
|
||||
content-hash = "56e8666167102cc23b605c8d91b26a62c0858216637cc281b866315da766ad71"
|
||||
|
||||
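Each name in the `[extras]` table above is an optional dependency: langchain only imports it when the corresponding integration is actually used. A minimal sketch of the import-guard pattern (the same pattern appears verbatim in the deleted `test_gptcache.py` further down; `pinecone` here is an illustrative choice, not copied from the codebase):

```python
# Illustrative optional-import guard for an [extras] package.
# "pinecone" stands in for any optional dependency listed above;
# it is provided by the "pinecone-client" extra.
try:
    import pinecone  # noqa: F401

    pinecone_installed = True
except ImportError:
    pinecone_installed = False
```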
@@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain"
version = "0.0.139"
version = "0.0.137"
description = "Building applications with LLMs through composability"
authors = []
license = "MIT"
@@ -32,7 +32,6 @@ torch = {version = "^1", optional = true}
jinja2 = {version = "^3", optional = true}
tiktoken = {version = "^0.3.2", optional = true, python="^3.9"}
pinecone-client = {version = "^2", optional = true}
pinecone-text = {version = "^0.4.2", optional = true}
weaviate-client = {version = "^3", optional = true}
google-api-python-client = {version = "2.70.0", optional = true}
wolframalpha = {version = "5.0.0", optional = true}
@@ -59,7 +58,6 @@ psycopg2-binary = {version = "^2.9.5", optional = true}
#boto3 = {version = "^1.26.96", optional = true} # TODO: fix it, commented because the version failed with deeplake
pyowm = {version = "^3.3.0", optional = true}
async-timeout = {version = "^4.0.0", python = "<3.11"}
gptcache = {version = ">=0.1.7", optional = true}

[tool.poetry.group.docs.dependencies]
autodoc_pydantic = "^1.8.0"
@@ -77,7 +75,7 @@ linkchecker = "^10.2.1"
sphinx-copybutton = "^0.5.1"

[tool.poetry.group.test.dependencies]
pytest = "^7.3.0"
pytest = "^7.2.0"
pytest-cov = "^4.0.0"
pytest-dotenv = "^0.5.2"
duckdb-engine = "^0.7.0"
@@ -90,20 +88,16 @@ pytest-asyncio = "^0.20.3"
optional = true

[tool.poetry.group.test_integration.dependencies]
pytest-vcr = "^1.0.2"
wrapt = "^1.15.0"
openai = "^0.27.4"
elasticsearch = {extras = ["async"], version = "^8.6.2"}
pytest-vcr = "^1.0.2"
wrapt = "^1.15.0"
redis = "^4.5.4"
pinecone-client = "^2.2.1"
pinecone-text = "^0.4.2"
pgvector = "^0.1.6"
transformers = "^4.27.4"
pandas = "^2.0.0"
deeplake = "^3.2.21"
torch = "^1.0.0"
chromadb = "^0.3.21"
tiktoken = "^0.3.3"

[tool.poetry.group.lint.dependencies]
ruff = "^0.0.249"
@@ -129,7 +123,7 @@ llms = ["anthropic", "cohere", "openai", "nlpcloud", "huggingface_hub", "manifes
qdrant = ["qdrant-client"]
openai = ["openai"]
cohere = ["cohere"]
all = ["anthropic", "cohere", "openai", "nlpcloud", "huggingface_hub", "jina", "manifest-ml", "elasticsearch", "opensearch-py", "google-search-results", "faiss-cpu", "sentence_transformers", "transformers", "spacy", "nltk", "wikipedia", "beautifulsoup4", "tiktoken", "torch", "jinja2", "pinecone-client", "pinecone-text", "weaviate-client", "redis", "google-api-python-client", "wolframalpha", "qdrant-client", "tensorflow-text", "pypdf", "networkx", "nomic", "aleph-alpha-client", "deeplake", "pgvector", "psycopg2-binary", "boto3", "pyowm"]
all = ["anthropic", "cohere", "openai", "nlpcloud", "huggingface_hub", "jina", "manifest-ml", "elasticsearch", "opensearch-py", "google-search-results", "faiss-cpu", "sentence_transformers", "transformers", "spacy", "nltk", "wikipedia", "beautifulsoup4", "tiktoken", "torch", "jinja2", "pinecone-client", "weaviate-client", "redis", "google-api-python-client", "wolframalpha", "qdrant-client", "tensorflow-text", "pypdf", "networkx", "nomic", "aleph-alpha-client", "deeplake", "pgvector", "psycopg2-binary", "boto3", "pyowm"]

[tool.ruff]
select = [
@@ -27,7 +27,7 @@ Any new dependencies should be added by running:

```bash
# add package and install it after adding:
poetry add tiktoken@latest --group "test_integration" && poetry install --with test_integration
poetry add deeplake --group "test_integration" && poetry install --with test_integration
```

Before running any tests, you should start a specific Docker container that has all the
@@ -55,12 +55,4 @@ new cassettes. Here's an example:

```bash
pytest tests/integration_tests/vectorstores/test_elasticsearch.py --vcr-record=none
```

### Run some tests with coverage:

```bash
pytest tests/integration_tests/vectorstores/test_elasticsearch.py --cov=langchain --cov-report=html
start "" htmlcov/index.html || open htmlcov/index.html

```
tests/integration_tests/cache/__init__.py
@@ -1 +0,0 @@
"""All integration tests for Cache objects."""

tests/integration_tests/cache/test_gptcache.py
@@ -1,61 +0,0 @@
import os

import pytest

import langchain
from langchain.cache import GPTCache
from langchain.schema import Generation, LLMResult
from tests.unit_tests.llms.fake_llm import FakeLLM

try:
    import gptcache  # noqa: F401

    gptcache_installed = True
except ImportError:
    gptcache_installed = False


@pytest.mark.skipif(not gptcache_installed, reason="gptcache not installed")
def test_gptcache_map_caching() -> None:
    """Test gptcache caching behavior."""

    from gptcache import Cache
    from gptcache.manager.factory import get_data_manager
    from gptcache.processor.pre import get_prompt

    i = 0
    file_prefix = "data_map"

    def init_gptcache_map(cache_obj: Cache) -> None:
        nonlocal i
        cache_path = f"{file_prefix}_{i}.txt"
        if os.path.isfile(cache_path):
            os.remove(cache_path)
        cache_obj.init(
            pre_embedding_func=get_prompt,
            data_manager=get_data_manager(data_path=cache_path),
        )
        i += 1

    langchain.llm_cache = GPTCache(init_gptcache_map)

    llm = FakeLLM()
    params = llm.dict()
    params["stop"] = None
    llm_string = str(sorted([(k, v) for k, v in params.items()]))
    langchain.llm_cache.update("foo", llm_string, [Generation(text="fizz")])
    output = llm.generate(["foo", "bar", "foo"])
    expected_cache_output = [Generation(text="foo")]
    cache_output = langchain.llm_cache.lookup("bar", llm_string)
    assert cache_output == expected_cache_output
    langchain.llm_cache = None
    expected_generations = [
        [Generation(text="fizz")],
        [Generation(text="foo")],
        [Generation(text="fizz")],
    ]
    expected_output = LLMResult(
        generations=expected_generations,
        llm_output=None,
    )
    assert output == expected_output
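The deleted test above also documents how `GPTCache` plugs into langchain's global LLM cache. A minimal sketch of the same wiring with a real model (the `OpenAI` class and prompt are illustrative; `gptcache` and an `OPENAI_API_KEY` are assumed):

```python
# Sketch: route an OpenAI LLM through GPTCache, mirroring the deleted test.
import langchain
from langchain.cache import GPTCache
from langchain.llms import OpenAI

from gptcache import Cache
from gptcache.manager.factory import get_data_manager
from gptcache.processor.pre import get_prompt


def init_gptcache(cache_obj: Cache) -> None:
    # Key on the raw prompt and persist entries to a local data file,
    # exactly as init_gptcache_map does in the test above.
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=get_data_manager(data_path="llm_cache_data.txt"),
    )


langchain.llm_cache = GPTCache(init_gptcache)
llm = OpenAI(temperature=0)  # assumes OPENAI_API_KEY is set
print(llm("Tell me a joke"))  # first call reaches the API
print(llm("Tell me a joke"))  # repeat call is answered from the cache
```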
@@ -1,16 +0,0 @@
import pytest

from langchain.document_loaders import UnstructuredURLLoader


def test_continue_on_failure_true() -> None:
    """Test exception is not raised when continue_on_failure=True."""
    loader = UnstructuredURLLoader(["badurl.foobar"])
    loader.load()


def test_continue_on_failure_false() -> None:
    """Test exception is raised when continue_on_failure=False."""
    loader = UnstructuredURLLoader(["badurl.foobar"], continue_on_failure=False)
    with pytest.raises(Exception):
        loader.load()
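For reference, the loader exercised by these deleted tests is driven the same way outside of pytest; a minimal sketch (the URL is illustrative, and the `unstructured` package is assumed to be installed):

```python
from langchain.document_loaders import UnstructuredURLLoader

# continue_on_failure=True (the default behavior asserted above) skips
# URLs that fail to fetch instead of raising.
loader = UnstructuredURLLoader(["https://example.com"], continue_on_failure=True)
docs = loader.load()
print(len(docs))
```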
@@ -1,40 +1,31 @@
interactions:
- request:
    body: [removed request body omitted for brevity: the same three passages as below, sent pre-tokenized as arrays of token IDs, e.g. '{"input": [[2059, 7341, 527, ...]], "encoding_format": "base64"}']
body: '{"input": ["Sharks are a group of elasmobranch fish characterized by a
|
||||
cartilaginous skeleton, five to seven gill slits on the sides of the head, and
|
||||
pectoral fins that are not fused to the head. Modern sharks are classified within
|
||||
the clade Selachimorpha (or Selachii) and are the sister group to the Batoidea
|
||||
(rays and kin). Some sources extend the term \"shark\" as an informal category
|
||||
including extinct members of Chondrichthyes (cartilaginous fish) with a shark-like
|
||||
morphology, such as hybodonts and xenacanths. Shark-like chondrichthyans such
|
||||
as Cladoselache and Doliodus first appeared in the Devonian Period (419-359
|
||||
Ma), though some fossilized chondrichthyan-like scales are as old as the Late
|
||||
Ordovician (458-444 Ma). The oldest modern sharks (selachians) are known from
|
||||
the Early Jurassic, about 200 Ma.", "Sharks range in size from the small dwarf
|
||||
lanternshark (Etmopterus perryi), a deep sea species that is only 17 centimetres
|
||||
(6.7 in) in length, to the whale shark (Rhincodon typus), the largest fish in
|
||||
the world, which reaches approximately 12 metres (40 ft) in length. They are
|
||||
found in all seas and are common to depths up to 2,000 metres (6,600 ft). They
|
||||
generally do not live in freshwater, although there are a few known exceptions,
|
||||
such as the bull shark and the river shark, which can be found in both seawater
|
||||
and freshwater.[3] Sharks have a covering of dermal denticles that protects
|
||||
their skin from damage and parasites in addition to improving their fluid dynamics.
|
||||
They have numerous sets of replaceable teeth.\n\nSeveral species are apex predators,
|
||||
which are organisms that are at the top of their food chain. Select examples
|
||||
include the tiger shark, blue shark, great white shark, mako shark, thresher
|
||||
shark, and hammerhead shark.", "Sharks are caught by humans for shark meat or
|
||||
shark fin soup. Many shark populations are threatened by human activities. Since
|
||||
1970, shark populations have been reduced by 71%, mostly from overfishing."],
|
||||
"encoding_format": "base64"}'
|
||||
    headers:
      Accept:
      - '*/*'
@@ -43,7 +34,7 @@ interactions:
      Connection:
      - keep-alive
      Content-Length:
      - '2675'
      - '2001'
      Content-Type:
      - application/json
      User-Agent:
@@ -156,230 +147,230 @@ interactions:
      [response payload re-recorded: the old and new base64 !!binary bodies (230 lines each) omitted]
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 7b50c20eeb771678-DME
      - 7b3ba19cfce39235-FRA
      Connection:
      - keep-alive
      Content-Encoding:
@@ -387,7 +378,7 @@ interactions:
      Content-Type:
      - application/json
      Date:
      - Sun, 09 Apr 2023 06:40:20 GMT
      - Thu, 06 Apr 2023 17:08:10 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
@@ -399,7 +390,7 @@ interactions:
      openai-organization:
      - own-45h3iv
      openai-processing-ms:
      - '331'
      - '247'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
@@ -411,12 +402,12 @@ interactions:
      x-ratelimit-reset-requests:
      - 20ms
      x-request-id:
      - 1931f0d4b02fd3de5f2966b3cc43d4f4
      - ddfb63c63597d48ade18afb82091b9f5
    status:
      code: 200
      message: OK
- request:
    body: '{"input": [[939, 7341]], "encoding_format": "base64"}'
    body: '{"input": ["sharks"], "encoding_format": "base64"}'
    headers:
      Accept:
      - '*/*'
@@ -425,7 +416,7 @@ interactions:
      Connection:
      - keep-alive
      Content-Length:
      - '53'
      - '50'
      Content-Type:
      - application/json
      User-Agent:
@@ -439,120 +430,120 @@ interactions:
  response:
    body:
      string: !!binary |
        [base64 !!binary embedding response payload omitted]
|
||||
m1muBls/pDkon+s3lddZDhaegaPJyTjNTyXrwjiy4DXlRqrPvsU+186CjyT0Zz9I2vHFtToQHmdA
|
||||
DzuHdwbUBjV8QsXB7jwfmYLhZsAxentkCM93p599GihalmG3NBQwbqWOAK6USiJdrjCZoulyhpPY
|
||||
HDF+7sKk26gPAhNALtjm37CYjIe3AlPNBCS+LDsYh4tVQTBJl19+ya7c4wZsge/QQB+PdsmXRUq7
|
||||
I77Lnt/O/vnLS8n6qwiM7atQBsL2rWKcyXFbe9dVCWjFNHoI36rT24ZTLfkclT/Kt6Amsw2ow7OD
|
||||
r3N9+oosRr96i2hUa8OSb31WhU1RdfuC8eWtUoiHuMOo3lTFzGMdnD8fZfl9nZCFL2YeRLv1qy+Y
|
||||
rLsNlIQsQ8Lg0rZ3DsRY8n1Uim7GfrwPL9aH6s/DG4xbwFIw8zk+eqfMmYRN5MHX0bDm+XGfkOgc
|
||||
zTw1dguvaONhl97BgxgYG677BtOrXP3qN1Ybo3Km5pqeQXYJfGpEx9IZK4b4hd/p/VwbjHnuzofo
|
||||
WKXkdTNeSfbBWScpemTh6HOIkgHargrXXh1QF5qzsW6vMdycVjVV5vezTxTGAH4UTI+Xzaud88gY
|
||||
KI+zj+XFTyy+rED/7Sh162Rg78T3Owie1wHtOKpqw8IrJbO3s8/tGDt5wSC6tx2ken36BjRIJx3+
|
||||
WU4F/Pevv//+93LCoKof6Xs+GNCnY//P/x0V+Cd+xP9wHP8P5X8nEUgXZ+mff/3vIYQ/37auvv1/
|
||||
+rpMP92ff/3N/04b/OnrPn7/v5f/mr/rv3/9DwAAAP//AwCq5bGX4SAAAA==
|
||||
H4sIAAAAAAAAA1SaWw+ySrel779fsbJu6S8iIjVZdwjIWUpBETudDigiCCKHKqB29n/v4Luzu/vG
|
||||
CyBI6jDmM8as//jXX3/93aRldh/+/uevv6uiH/7+H8u1RzIkf//z1//8119//fXXf/x+/78nszrN
|
||||
Ho/ik/8e/90sPo9s+vufv/j/vvJ/H/rnr79terqSq58WKUse2wC9r2DSe42qdLrIJx6yAGx6Qncr
|
||||
pMfDLpJSbVNR97iuGBsu7RnQJa78dUDXLiOG7aF2+97j3cSmckJwMdCjQwI26P3ojkmCAthXQ4dd
|
||||
fmYhqz9bH/pKvfpM31wQq6r1CC7yc2pKiE8nt+QyUNcph/GJKO6bpI8M6CePqBF81W7a7DwVJeE1
|
||||
JGPqHNIpzo897LLzGh92s43GRzXkyJq/sS9i0rntF14ZrKxYoF7s3dOBIqVGyRVX1Lw8+HK+f8dY
|
||||
rrX45pe2bLkzVboZ7t9DQDXO+bDPZn+uUR/fZupfQAn5sVZVmGtHw/o5M9HMZbsEwqr6YNcLXuFY
|
||||
70Vuq7h24hdeG2nscJIlVIgN56+8ZmBje+w8UKukJOIqNkK2jC+o1sRRo4rVbuhxFMF24Ctsvqqb
|
||||
RmJdsOB72TKqZYfJnbSL2YIy8CX2hMZGTG1QC28xu2H9cM66+VOkI+r7GWGjiD7avFltY9S1xhVr
|
||||
VOA6cnGbO8jlziZStKIa69C5hhOcPXp6O7rGLC64w43jCFXk+oRYbtccxN0UUqvkzm4bNjRAPTIQ
|
||||
1v31iIZyI0Tw0DrFF0uO18Y30iPZNA8GthvOQYKnvRoY9tr8Zzymdxu3wITviPXt7cbY3b/wIBnm
|
||||
nZpF14WzldEzKjA6Eqk9sXI4HKpIYs5u9MnLUEI+QSPIGIsTvldyzib8QSpge91gF/GkI/vLWMDp
|
||||
8LpQ1WsFd0SfmEdz2l9wbOz5lAVEJzDgTKf2F2jIVu4VYGUlAt7vDbNjnFoLkPtx4gvLfPBm+NKh
|
||||
Qo5MBPnQdaOQdBYKNoFH2AN7ZU9aoQbJCkK6+3YHbbKDsw9F7PTYyh6uO/U4O4OVPw3suh5hM0JO
|
||||
DmfPVn1h1Rfh92RxmRTEnIfVXS64bRPMDayFp0vW/Kiyzflz7sE5VFd8UE7XbnySqJG6js1kEpOD
|
||||
O89qL8K2+xK8Q/cmJNLW1EEKyA57R2/f9c6l8IGX6YPuzWqtTW+F5lL59G54p5kS+wZJqK7Kr3uk
|
||||
iis+y5lspDe6FZudP5oXpE2WZIuge8/O30ofMZw4stZB7lcrAt8iR6zoc136KDue+v0xT9m1OwXo
|
||||
49wNvzlhPu0vXJ+guVUv+Le/57XxvcMx7iR/ErYlm9KvmcNh0nNiVoPBxh35GnDdJJRUerll4ydz
|
||||
fahvdxkr4r5BI5ZaB7bmycFHU2xQX72ZD0l4CfHjovshScc8loXSdP/MH2viywjrJFLJBvOPjupB
|
||||
m6N1e31gnD9y98vTV//bz37zfomIXvKeSFe/25EpuFnlbFw4H0Uu96CejPfhqEb3AkhDdIzvlzmc
|
||||
OC7zJS0/l9jzPlU3706IA+fwufqi8nbD8eO3BqgnLcFmT2/a+FsvV5Y/8KM65+UkR4UAn76JKTZW
|
||||
ESLye4rlbdqE+HF82WjaXzUdiOp86e5lWWXtzgcFBQXSqGsf92zaaUMMOVIT6un6y2VEaFRUy+sX
|
||||
1hjXp+ND//BIMDgV73JVSOfXJU+AHvTGnztD65b16kms77dUOxSWNj0fBSenx8zxeRm4lKq3JIcx
|
||||
FSV/nLJDyMza8uG4DiVfCL5qOUZHW4FNyos0ZscdY09FFlAcliLdCZHgtnjIHNRs55but+EnnNa1
|
||||
cNw+L+2AzU2muoIncRYYk1X4cFAhZNTwJbTZz2fsCgTSTx4ECVT626TJKGKXidHWAeypT2wd5sIl
|
||||
/OF2h0gKRSLtfMamLKU59AbO/XnjW9p8SJkITtgB6WZ3g0Zu9w5QWrR36h29qhuPAWToVtOQ6oeN
|
||||
F34L5SXAc2IC4Qm+IZLrLg95TxJqB22L5uvD6WGcs+i3P9x+t85muIaKS23yZGyi88WBGb8/1GZV
|
||||
rTEmFQEcrGgmtdiaWn9631R0V7uNLwy56dY537bA78yM4u4ZaHNXH88oLZo79teRHU5MxgnqBu6L
|
||||
7Wym6azrKQf87bylFiauNtaXRAe04gNqll/KOiXDuvTgW5sMfRQwEjw5C6y5i/FhvbI7djm5bygI
|
||||
PhJxw+qQ+WWQ/fSPursydqejcohAfdgmedgSCWf6+t4Rerg2Vj7BxCbrJUeANC/E3pm2LmW3/g3U
|
||||
VHT6OD1JSFZ7EKA4Bwk2jiqnzVpvN4At60DvynOn9aKLCfrVf7h+N+X0jZQGWFJ88F63Sm1+VXaO
|
||||
xjpq/FGdhY4xx72DfPEV6itu5ZJa7EVIxUE50HQeyomGgfSrL4R577KjAttwqLmaL7KxL0dtstp8
|
||||
hp296n2YTIXN9OhJYHn6y0ei9NJoIIwWiHIU4kNSbcJhfq8daTTLCWsrywsJf5/fsiWlQECD0J2i
|
||||
S2+hebPJ/O/82ZVT2Tx8OPpUww65vdLOO42+7AjXPbbzMC5/6xMVxTvBfiS9UfM1ZwNx+z7Ah9Sq
|
||||
XJqJXgPDpfhSPy+DbkzcxEHPQ3oiW95qUcehyQC+PI1UN5ysHDVFS0DvggJ70sFG86qecxi5WsO2
|
||||
PL060q2nO/Cn94V6Tkm6eXpRZ7voBbbt9wsRKfEEeHRbgS7j7c5kM9dA5VrCiiG9w3GucY9uwDaE
|
||||
0/Zz17+9hoM8vp78Wbs/tTnmlOSPHmFjF6TTvJ8l0GLT9OelfmVqGRdwfx5TbHY+Ziy7Ghw03b2l
|
||||
Fj+57qIPvHTIP1/CQZx208u8CTCenYhI+0lMycexC/Sd85iq2q5Jx5NEeeRkho+1427NyEfKI9kt
|
||||
5D1VgmfqTtL528srA6+wu7qRrgY+yIFF7QW7rbvTxq94lOStFoskF/NV2e+34VlyYYuw83Kdcmqr
|
||||
z31zV7g79vGIylmabzEID6Wm9sq5o8l9WCN6dsOJWlptlOPF39fQQfDFXo32IV/vRUA763jCjnAT
|
||||
tZE+jiKs28sD+9o+6NiNdxJo5ezqT0u97Ye3IsEpUzx/vcFjSn96ybW6QF3aO5ogbKwjgHCzsbJ8
|
||||
L/UaU4E+Ile6v0zHkjjfUIIt2zrY0K48Y/UmOAP3WqfU+cx7xh/El4L8g3egh+m+d8m+NyNophLj
|
||||
gyATlzqb9i7Fb5WSQb9V6ehmaYS6Ab5U1e5Pd+yLQwbDCiKsvp9NOqoGStDHP8fUVFoZjVaJJbg1
|
||||
ouPTl4DS7tA6GaTcbFPPel5Z/747BkKmUZEtR2uNhoLMo7m2NJym9502epbqATS1Qg8V3aRTbzUt
|
||||
2FX0olZb0Y5+k5sP8Ul4EAgs0lF09HnYpzOPvbV57ZqJOx+h7jpMJuOku2MZxgDC1qX+lj7EdAhL
|
||||
z5FycVNiD38VxM94FP/wtLFt3HLefpsj0rdmR1a2NrBJfXcGANe8ySt7DdrkRfH82294Z5ze2nA6
|
||||
vURZsycTu9P4cvsdeeno4shnIjvizIYhl0SQ5UzwP9a1K//o3U1xRbJ2h4LRk3E7w3PUUxrfb+eQ
|
||||
/Pi2qhnvo7xow6lus0gKsteWKlY6dOMrUQAMERnU8e9OyUJBFn77m1rCcHbpUX9IyN2cfV/AutpN
|
||||
ZlOdUZmjNd2dxShlq7OboPeVM/GvnjEI1EDeaNWD9HILLiUDX0O5bUy6LxPOnbzoOMtLfSHsbGul
|
||||
8NOTXd+usfU2Co00DvaQ7U97rCjC0LXV7hgjdxP5/lQYB43NuyT68RzW7fOHMRfzIzzwPGDHOQPr
|
||||
9+NHhC88faw6WlLS7zbMII12G7p3PmM3rGsuQH3UX4nAm0XZjFEuoe8FMcI+YeKOW/kFcnE+JmQU
|
||||
82fJXCVtYdFfbO0KjJj3up1R1ksrqjW3vqMlGBIM9PFceGjnTpugqX9+DuNBQeGyXji01Cfqye1d
|
||||
oxaXZGgVEQG76f3lzrfNMf7Dg+XJbFzWluoZrZOzSu8rMqNp5x0F+SNxMfV2j6ocx9pRgRtka+H3
|
||||
yJ2uA5eAZXkYH6TC10YYdyDPj9voz6qmMSFUmAjRoRPJpB+GdNoUdo9+vLvZj6M7i583D3H4ErFf
|
||||
fd+ovQSRA0HNLv5bfpndKPKiAMnzqVAvi3M2KV1ooKW+Ln6yTtnGV1TZyXSfZhCni56dEnD9OvdX
|
||||
lzB3J1vCNUzbQqPW6NflXGS3AvihV/HFNrJw2AR5DXvBK6iW9xljVSXP8DyNKdVvOy9tf9+/f6tH
|
||||
epAK4nblOS3Q4Vj7/iqwIzQFX3Ck5mC11GQRH052Oc4yf7l/yLC9I9Rv9ECFxW+Q6V0liDSO6UNh
|
||||
ftdUaRhyR8ubDImaqo4PON2E804NBDhENqJ+9dXR5pcfLPxOVZFOYfs4lQJEkFHC3+s+JQsfoU6M
|
||||
r6SgLym8FXTvQVXpN3ohtpnOu9dDBzS0yjLzL8Z2uVNAVRm3P35t8/N/n/b5wvbFjrvJVpkOxyp4
|
||||
0v23LhAjA9Rgo9se+5OPyrEv9tnP/xOoklDb+DN7o5/frUnWdeNhlRGk+hsTa/NDcom9ywupu1U5
|
||||
NdhudJf7vXTKVI8aDpzZ6K5u6o//6KO0UrT4L0laxh8Hi5+doVhzwK+kM1XUOeomnn4JeOb166PW
|
||||
3bnzrWICEpOvQ9W508rN3X/w4uIPqeXWckjTd83BzXk7ZOsXD/eP382OU0B1a0pcduKh/+MnvTN1
|
||||
XBoVjQOTMkY4C65KukaucITLKx/pIXgEbJQucPzlBWQ7li0jJBbPsOQ//llML+EUzlWA5E0ckDH6
|
||||
NGxWtt8WsfMz868rrXBrT+IcuN3H/+Jd4VXZBTroWY73HH8Mn9nV52D3viUUr87vdHpvtyOS4pD4
|
||||
qxSvUM+d9vGf9eCkBet67GgRMsStQffiRUWkkNMZhTuskUZa7xAxsFWgqp54ukufp4766n3engP8
|
||||
IetiV7BJFw5nNLs4I3yINI2fzcMdhXaWUXc6hOkInyxH9iZnf/IoVupSDDEEDtWsVdm1sS44kDn9
|
||||
gK1olFN2aNU75HV38ufTaKWMOdodCXIuYrVIT+kIq85Ap9x7YrcxnZAs47FVT7uEDBedpDXcrSOY
|
||||
/qolqxPJ3U58EP43Xj7q7Kpb+DxDt2ND/U3AGYjNu+CMusvg49cpfbvjwm+//YCP9m12F54DOJ+T
|
||||
DLv35F4u9SiStELjsLP4J3a+ordkWAlPF54q2yoPGqjBMSiubys0rcaxgFS1HKzVopaulXTL/fw+
|
||||
Nr+BgQR1nlqQmrHCPlsr2vzZZQZa1ovP7ifLHd93R4ep8zUfslhB8+se6fAqR2vRc4p++Rlw9vu9
|
||||
6IvvtovfQvVnlvHuXUmsvahrBRY9XerLHY0LT0Nn3nzs0Bk6WupzAqu1Iiy8eGF/9AfsycYWRK47
|
||||
Pj85yFlcCmR7rM/uxIQbgXpUb9h3D1d3SIORoNTbK/6rGZpyzr6cgZJL7uBY9OWU3d7fI4T7t0fT
|
||||
2d2wvsiFFgo+XPsvkE8daQcrgi/X7Glgvns0yKZN0CcOMf19zxgddwps7fsOPyvlXbI350TgDUj/
|
||||
Uw/mWnyLELXBwz+Kewut/Si6gyX2L5/EKdGYZ+sjuPVm2RYx6ua3pcSANvXGl54mK1kvxwoYinvz
|
||||
xx/P1/uRA5LEur+FDqfrm1hwsMq7AR9syU8n4Tg40hbnFcV99e7Y9tkeYdzzkb+99bLW0MdRksVC
|
||||
rKl+zUuNNqfGAIQ4BbvV3mfsXYk6LPkOtb6fTus9oXpL52PhETg5rTvla9lDAnurZFryh847iT4c
|
||||
N9lEbVWuQ8qpNQ8//l3yA0TM67ZFy31fsJUOzb/6d1Mjih13jTXmPi8S+vG3Q68VG4RmS9Dir/2J
|
||||
epHWaVM/wo/35+OsoY3X4GW9pKPPFj7o07N2hCoab/SQWnttqNvsjET9XVN90asZCpmDiCcuPWTD
|
||||
3uWNU9UCwmeFet9CQZu5iBXYfiHE53K/RaMzQAZkbpCP7rKTIr9M7sDZ9RurQzWF+VeMxV/+hG3J
|
||||
25Uboxf9P/VhCm5NOd+M7RsWvfKlIxJTFs8GB+Ywv7CZcI9u+jDzDuNLfmLlGzWI3IxtjYxjc8T3
|
||||
fhW64zWWLETu5pNsE9yiYfG76KvRF4FnVIVsEl8OyAkbfOne1O407MQWbjBtSGcY93K+pLs3WunF
|
||||
Gy/5BWoW/YTcwyWZzqIQzo12KmD76kWcGgaU1OrXEvSBsF/qWeJOrB5rYOdHRsTJ67R5jDkV6t2h
|
||||
8NkD9x17PKUY6lN/879j6TBeH1sPNjtJxf5d8d2e272P8Muf9MK8pGz6WipwuaOQ5nYV/+Q1wD07
|
||||
i+qxZiLh+nAIfFSvW3iY17qib3SUCOqdYls7MLY6a4mklrjHGNF3OMfrMwevVFtj9fXwEPNOo/fr
|
||||
P2B/Z9ul8ERBCwu/ECi3NlvyUg+s5tDS3TaeOtacGv2nl2TxF+G8P+wNWPTgT57VL3k++umHcjOE
|
||||
rp5e1BK9Xbj3Z/ZtXLrwMVryHSI4cEZTcst7+fVpE5+/rUbWj1EuygMYOnbZydGEmLMS+EDHCNc5
|
||||
AxvS4sSDQl+cf1a+l44V7DLCwcM66YSw6v7w8ZJXEOqaern+rZcjez+pXrSlRkSrqFG7rffU4R/U
|
||||
Xd+ltwGZcXn700HQtc2DpgWU+txjZcfPy3jmOhwdfqJL/hayOBUtYFmypUrhrbrpQDpVWvpFOE5E
|
||||
1yVLvoB40/lQK4bCnTY7XYHh0yPsfnYb1Np5McN9FQp+/uO7IK3ev7yeOi+3LScmmzGopl76bH4k
|
||||
2hxOAQFNCm8+dzgRdzpwngOPMVQIhy9vNOofYYbKaTqc7exvN7/umQ4SL4VYi20ppMv8y0zoRrJ5
|
||||
Gl3am+HLQPJ0sTFe/N4YB6rz4zuy6enWHbQHMuCm2CI2zGfxX/6iw80Xq29ppf2pd/c0tql7l9uQ
|
||||
vK/ffPv6NAk2MS+X7GSczkD6d4T1rFinbH0eVXi0fopx+xq6cctzBZKtT4oP693e3eTqFEjnuh6p
|
||||
u+8qRge+r8HwToa/DgMVkdvmmMhrrq/pz++y+cjucFSriLreIGp0/OhvgD4L/KK8rVEHk1egcPY8
|
||||
at/6h0Z//LboPd4n57XLxo/3Rp9zImMltmaXvlKtANLX0cLX13A6qcqI/vC0Xpw6wbLiHBb98Cf/
|
||||
tChtvAJY5/EH74vVJ2wjXcyAP9UXfMCKUvZm6fjgpOiKcYqfCx/zb4hceNBfP24667YIVSCavuTF
|
||||
IRuPE9dL7zow8a7UT+VI+1mRutEB/7OVKaOdI4/otYaEKg//HTJ7/VF//UCyz9GxJOr2E4GmKqUv
|
||||
b1BQTo9ASKRsXr2pfs4+bBzXpQVLPw/v/FPC5v1FLH5+i3pzftfm9/VVyD//L14F3C39Q/Lrl/iS
|
||||
ca7dHsmfFrZgVdhkW1VbC5QmKGrMmu7doUA/HgDZEBX8WO8qjW4eXwu8QzxTHPVpOIWlZ0F02Z4X
|
||||
v5mmI/9KLfSZe/lPHs6CrHwDyysPL/rlstSPHejVi0/GjZygXt0kPsJi88R2HO7QwuMExbJcEbTk
|
||||
4dP3Lpxh8Qt4zx8ubq89D+2f+XTvCZRTtj8SdDlpWx8tedIk7a0aquNwJNLTDLvp7hyO6MMJrS/o
|
||||
YubOUYoDacmLcEaHIJ1H+yyB4JWUiLeryKYnVB46i5P6px41d8Q16Oy5KtVSoqU01roaefvWp/7O
|
||||
/pa//AcuB8vF6Vasuo5zkwA6Mbli7WQ22vzLtzZEc6hHtS+a/CjLoKizHluVXpezXt0aCOWN6r/4
|
||||
YJXSjW8pMAXQ+KvZGErWWNBCej7f/e30oikRZJmDKWeTPyROXpLBTGc0g/f57S80XpswQDendjCu
|
||||
b0/EcLg9Q75xLLKqYeiG4XQbf/1IAobSaot/j5FBOIzdc1KhaeazAC39M/x73/QIuBhlRxbQ3cLX
|
||||
rCZEQtfKe9BrUhqMXbtbADJ1BFJn08d9pcHYy4p5snCEn7d0kljHgcSLIdWK8VIO28Jc+jFGQ93n
|
||||
7uX+8kX0iU+Y2pL36n7vQ9DfA7ysP8YET69R/FYotU6YD+tbFPQgrjTib1aauvSb1xKs1qpAdYK3
|
||||
jFkH1kiWdAPqe+G37B/yrMPfv1MB//mvv/76X78TBnXzyKrlYMCQTcO///uowL+TR/Jvnhf+TYU/
|
||||
JxFIn+TZ3//81yGEv79dU3+H/z007+zT//3PX8Kf0wZ/D82QVP/P5X8t//Wf//o/AAAA//8DAL+P
|
||||
H9bhIAAA
|
||||
      headers:
        CF-Cache-Status:
        - DYNAMIC
        CF-RAY:
        - 7b50c215fcf91678-DME
        - 7b3ba1a41cc19235-FRA
        Connection:
        - keep-alive
        Content-Encoding:
@@ -560,7 +551,7 @@ interactions:
        Content-Type:
        - application/json
        Date:
        - Sun, 09 Apr 2023 06:40:21 GMT
        - Thu, 06 Apr 2023 17:08:12 GMT
        Server:
        - cloudflare
        Transfer-Encoding:
@@ -572,7 +563,7 @@ interactions:
        openai-organization:
        - own-45h3iv
        openai-processing-ms:
        - '14'
        - '737'
        openai-version:
        - '2020-10-01'
        strict-transport-security:
@@ -584,7 +575,7 @@ interactions:
        x-ratelimit-reset-requests:
        - 20ms
        x-request-id:
        - dfb9f22839291be2e6a38403a0861937
        - f2f02f88b1097ce728e4b79198958fde
      status:
        code: 200
        message: OK
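These `headers:`/`status:` entries are the tail of a vcrpy-style cassette: the YAML a recorder writes for each HTTP exchange so a test can replay the interaction offline instead of calling the live API. Below is a minimal sketch of recording and replaying such a cassette; the cassette path and the filtered header name are illustrative assumptions, not taken from this diff.

```
import requests
import vcr  # vcrpy

# Scrub the Authorization header so API keys never land in the cassette.
# The cassette path below is hypothetical.
my_vcr = vcr.VCR(filter_headers=["authorization"])

with my_vcr.use_cassette("tests/cassettes/openai_models.yaml"):
    # Recorded to YAML on the first run, replayed from YAML afterwards.
    response = requests.get("https://api.openai.com/v1/models")
    print(response.status_code)
```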
File diff suppressed because it is too large

File diff suppressed because it is too large
@@ -2,22 +2,26 @@ version: "3"

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.7.0 # https://www.docker.elastic.co/r/elasticsearch/elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:8.7.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false # security has been disabled, so no login or password is required.
      - xpack.security.enabled=false
      - xpack.security.http.ssl.enabled=false
      - ELASTIC_PASSWORD=password
    ports:
      - "9200:9200"
    healthcheck:
      test: [ "CMD-SHELL", "curl --silent --fail http://localhost:9200/_cluster/health || exit 1" ]
      interval: 10s
      retries: 60
      interval: 1s
      retries: 360

  kibana:
    image: docker.elastic.co/kibana/kibana:8.7.0
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=password
      - KIBANA_PASSWORD=password
    ports:
      - "5601:5601"
    healthcheck:
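The change above tightens the Elasticsearch healthcheck from one probe every 10 seconds (60 retries) to one probe per second (360 retries), so the container is marked healthy sooner while keeping the same overall budget. A test harness can wait on the same endpoint the healthcheck curls; here is a minimal sketch with `requests`, where the function name and timeout value are illustrative assumptions.

```
import time

import requests


def wait_for_elasticsearch(url: str = "http://localhost:9200", retries: int = 360) -> None:
    """Poll the cluster health endpoint, mirroring the compose healthcheck."""
    for _ in range(retries):
        try:
            # Same endpoint the docker-compose healthcheck curls.
            if requests.get(f"{url}/_cluster/health", timeout=1).ok:
                return
        except requests.ConnectionError:
            pass
        time.sleep(1)  # matches `interval: 1s`
    raise RuntimeError("Elasticsearch did not become healthy in time")
```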
@@ -1,6 +1,4 @@
"""Test Chroma functionality."""
import pytest

from langchain.docstore.document import Document
from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
@@ -16,17 +14,6 @@ def test_chroma() -> None:
    assert output == [Document(page_content="foo")]


@pytest.mark.asyncio
async def test_chroma_async() -> None:
    """Test end to end construction and search."""
    texts = ["foo", "bar", "baz"]
    docsearch = Chroma.from_texts(
        collection_name="test_collection", texts=texts, embedding=FakeEmbeddings()
    )
    output = await docsearch.asimilarity_search("foo", k=1)
    assert output == [Document(page_content="foo")]


def test_chroma_with_metadatas() -> None:
    """Test end to end construction and search."""
    texts = ["foo", "bar", "baz"]
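The `@pytest.mark.asyncio` test removed here exercised Chroma's asynchronous search path. The same call can be driven outside pytest with `asyncio.run`; a minimal sketch reusing only the names visible in this diff:

```
import asyncio

from langchain.vectorstores import Chroma
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings


async def main() -> None:
    docsearch = Chroma.from_texts(
        collection_name="test_collection",
        texts=["foo", "bar", "baz"],
        embedding=FakeEmbeddings(),
    )
    # Async counterpart of similarity_search, as used in the removed test.
    output = await docsearch.asimilarity_search("foo", k=1)
    print(output)


asyncio.run(main())
```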
@@ -8,7 +8,9 @@ import pytest
from elasticsearch import Elasticsearch

from langchain.docstore.document import Document
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings

@@ -43,6 +45,16 @@ class TestElasticsearch:

        yield openai_api_key

    @pytest.fixture(scope="class")
    def documents(self) -> Generator[List[Document], None, None]:
        """Return a generator that yields a list of documents."""
        text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)

        documents = TextLoader(
            os.path.join(os.path.dirname(__file__), "fixtures", "sharks.txt")
        ).load()
        yield text_splitter.split_documents(documents)

    def test_similarity_search_without_metadata(self, elasticsearch_url: str) -> None:
        """Test end to end construction and search without metadata."""
        texts = ["foo", "bar", "baz"]
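The new `documents` fixture loads a `sharks.txt` fixture file and yields it split into 1000-character chunks. A hypothetical test consuming it might look like the sketch below; the `from_documents` call and the query string are assumptions about intended usage, not lines from this diff.

```
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings


def test_from_documents(documents, elasticsearch_url: str) -> None:
    """Hypothetical test: index the fixture's pre-split documents."""
    docsearch = ElasticVectorSearch.from_documents(
        documents,
        FakeEmbeddings(),
        elasticsearch_url=elasticsearch_url,
    )
    output = docsearch.similarity_search("sharks", k=1)
    assert len(output) == 1
```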
@@ -46,9 +46,7 @@ DEF_EXPECTED_RESULT = TestModel(
def test_pydantic_output_parser() -> None:
    """Test PydanticOutputParser."""

    pydantic_parser: PydanticOutputParser[TestModel] = PydanticOutputParser(
        pydantic_object=TestModel
    )
    pydantic_parser = PydanticOutputParser(pydantic_object=TestModel)

    result = pydantic_parser.parse(DEF_RESULT)
    print("parse_result:", result)
@@ -58,9 +56,7 @@ def test_pydantic_output_parser() -> None:
def test_pydantic_output_parser_fail() -> None:
    """Test PydanticOutputParser where completion result fails schema validation."""

    pydantic_parser: PydanticOutputParser[TestModel] = PydanticOutputParser(
        pydantic_object=TestModel
    )
    pydantic_parser = PydanticOutputParser(pydantic_object=TestModel)

    try:
        pydantic_parser.parse(DEF_RESULT_FAIL)
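Both tests drop the explicit `PydanticOutputParser[TestModel]` annotation in favor of plain construction; the runtime behavior is unchanged, since the generic parameter only informs the type checker. For context, a minimal self-contained sketch of the parser round-trip, with an illustrative `Joke` model that is not part of this diff:

```
from pydantic import BaseModel, Field

from langchain.output_parsers import PydanticOutputParser


class Joke(BaseModel):
    setup: str = Field(description="the question")
    punchline: str = Field(description="the answer")


parser = PydanticOutputParser(pydantic_object=Joke)

# The parser turns a JSON completion back into a validated model.
completion = (
    '{"setup": "Why did the chicken cross the road?", '
    '"punchline": "To get to the other side."}'
)
result = parser.parse(completion)
print(result.punchline)

# It can also emit formatting instructions to embed in a prompt.
print(parser.get_format_instructions())
```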
@@ -22,17 +22,6 @@ multiply()
```
"""

_AST_SAMPLE_CODE_EXECUTE = """
```
def multiply(a, b):
    return(5*6)
a = 5
b = 6

multiply(a, b)
```
"""


def test_python_repl() -> None:
    """Test functionality when globals/locals are not provided."""
@@ -91,16 +80,6 @@ def test_python_ast_repl_multiline() -> None:
    assert output == 30


def test_python_ast_repl_multi_statement() -> None:
    """Test correct functionality for ChatGPT multi statement commands."""
    if sys.version_info < (3, 9):
        pytest.skip("Python 3.9+ is required for this test")
    tool = PythonAstREPLTool()
    output = tool.run(_AST_SAMPLE_CODE_EXECUTE)
    print(output)
    assert output == 30


def test_function() -> None:
    """Test correct functionality."""
    chain = PythonREPL()
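The removed `test_python_ast_repl_multi_statement` fed a multi-statement snippet to `PythonAstREPLTool`, which parses the input with `ast`, executes the statements, and returns the value of the final expression. A minimal sketch of the same call outside pytest; the import path is an assumption based on the layout of the repository at this time, and the snippet here is unfenced so it parses regardless of input sanitization.

```
# Import path assumed for langchain at the time of this diff.
from langchain.tools.python.tool import PythonAstREPLTool

code = """
def multiply(a, b):
    return a * b

multiply(5, 6)
"""

tool = PythonAstREPLTool()
# Executes all statements, returns the last expression's value.
print(tool.run(code))  # expected: 30
```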