Compare commits


4 Commits

Author SHA1 Message Date
Bagatur f2e51266cb fmt 2023-07-24 16:39:36 -07:00
Bagatur e656e8cb8b merge 2023-07-24 16:38:57 -07:00
Harrison Chase 7c3ce368d7 Merge branch 'master' into harrison/async-web 2023-07-20 15:25:14 -07:00
Harrison Chase 56a321ba81 stash 2023-07-20 09:32:19 -07:00
178 changed files with 962 additions and 3130 deletions

View File

@@ -19,7 +19,7 @@
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/hwchase17/langchainjs).
**Production Support:** As you move your LangChains into production, we'd love to offer more comprehensive support.
Please fill out [this form](https://6w1pwbss0py.typeform.com/to/rrbrdTH2) and we'll set up a dedicated support Slack channel.
Please fill out [this form](https://forms.gle/57d8AmXBYp8PP8tZA) and we'll set up a dedicated support Slack channel.
## 🚨Breaking Changes for select chains (SQLDatabase) on 7/28

View File

@@ -11,14 +11,14 @@ Language models can be unpredictable. This makes it challenging to ship reliable
LangChain exposes different types of evaluators for common types of evaluation. Each type has off-the-shelf implementations you can use to get started, as well as an
extensible API so you can create your own or contribute improvements for everyone to use. The following sections have example notebooks for you to get started.
- [String Evaluators](/docs/guides/evaluation/string/): Evaluate the predicted string for a given input, usually against a reference string
- [Trajectory Evaluators](/docs/guides/evaluation/trajectory/): Evaluate the whole trajectory of agent actions
- [Comparison Evaluators](/docs/guides/evaluation/comparison/): Compare predictions from two runs on a common input
- [String Evaluators](/docs/modules/evaluation/string/): Evaluate the predicted string for a given input, usually against a reference string
- [Trajectory Evaluators](/docs/modules/evaluation/trajectory/): Evaluate the whole trajectory of agent actions
- [Comparison Evaluators](/docs/modules/evaluation/comparison/): Compare predictions from two runs on a common input
This section also provides some additional examples of how you could use these evaluators for different scenarios or apply them to different chain implementations in the LangChain library. Some examples include:
- [Preference Scoring Chain Outputs](/docs/guides/evaluation/examples/comparisons): An example using a comparison evaluator on different models or prompts to select statistically significant differences in aggregate preference scores
- [Preference Scoring Chain Outputs](/docs/modules/evaluation/examples/comparisons): An example using a comparison evaluator on different models or prompts to select statistically significant differences in aggregate preference scores
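The evaluator types listed above all share a common calling shape. A framework-free sketch of a string evaluator (illustrative class name, mirroring the `evaluate_strings(prediction=..., reference=...) -> dict` shape rather than reproducing LangChain's actual implementation):

```python
# A minimal sketch of a string evaluator. The class name is hypothetical;
# only the evaluate_strings(...) -> {'value', 'score'} shape is taken from
# the interface described above.

class ExactMatchStringEvaluator:
    """Scores 1 if the prediction matches the reference (case-insensitive), else 0."""

    def evaluate_strings(self, *, prediction: str, reference: str) -> dict:
        match = prediction.strip().lower() == reference.strip().lower()
        return {"value": "Y" if match else "N", "score": int(match)}

evaluator = ExactMatchStringEvaluator()
result = evaluator.evaluate_strings(prediction="four", reference="Four")
```

Off-the-shelf evaluators replace the exact-match rule with an LLM judge or a distance metric, but return the same kind of result dict.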
## Reference Docs

View File

@@ -8,7 +8,7 @@ Head to [Integrations](/docs/integrations/llms/) for documentation on built-in i
:::
Large Language Models (LLMs) are a core component of LangChain.
LangChain does not serve its own LLMs, but rather provides a standard interface for interacting with many different LLMs.
LangChain does not serve it's own LLMs, but rather provides a standard interface for interacting with many different LLMs.
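What a "standard interface" buys you can be sketched framework-free: caller code is written once against a single shape and works with any conforming provider. (The `Protocol` and provider classes below are illustrative, not LangChain's actual classes.)

```python
from typing import Protocol

class LLM(Protocol):
    """The shape of a standard LLM interface: text in, text out."""
    def predict(self, prompt: str) -> str: ...

# Two hypothetical providers satisfying the same interface.
class EchoLLM:
    def predict(self, prompt: str) -> str:
        return f"echo: {prompt}"

class ShoutLLM:
    def predict(self, prompt: str) -> str:
        return prompt.upper()

def summarize(llm: LLM, text: str) -> str:
    # Written once against the interface; works with any provider.
    return llm.predict(f"Summarize: {text}")
```

Swapping `EchoLLM` for `ShoutLLM` requires no change to `summarize`, which is the point of the shared interface.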
## Get started

View File

@@ -31,7 +31,7 @@ There isn't any special setup for it.
## LLM
See a [usage example](/docs/integrations/llms/INCLUDE_REAL_NAME).
See a [usage example](/docs/modules/model_io/models/llms/integrations/INCLUDE_REAL_NAME.html).
```python
from langchain.llms import integration_class_REPLACE_ME
@@ -40,7 +40,7 @@ from langchain.llms import integration_class_REPLACE_ME
## Text Embedding Models
See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME)
See a [usage example](/docs/modules/data_connection/text_embedding/integrations/INCLUDE_REAL_NAME.html)
```python
from langchain.embeddings import integration_class_REPLACE_ME
@@ -49,7 +49,7 @@ from langchain.embeddings import integration_class_REPLACE_ME
## Chat Models
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)
See a [usage example](/docs/modules/model_io/models/chat/integrations/INCLUDE_REAL_NAME.html)
```python
from langchain.chat_models import integration_class_REPLACE_ME
@@ -57,7 +57,7 @@ from langchain.chat_models import integration_class_REPLACE_ME
## Document Loader
See a [usage example](/docs/integrations/document_loaders/INCLUDE_REAL_NAME).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/INCLUDE_REAL_NAME.html).
```python
from langchain.document_loaders import integration_class_REPLACE_ME

View File

@@ -29,7 +29,7 @@
"source": [
"from langchain.evaluation import load_evaluator\n",
"\n",
"evaluator = load_evaluator(\"labeled_pairwise_string\")"
"evaluator = load_evaluator(\"pairwise_string\", requires_reference=True)"
]
},
{
@@ -43,7 +43,7 @@
{
"data": {
"text/plain": [
"{'reasoning': 'Response A is incorrect as it states there are three dogs in the park, which contradicts the reference answer of four. Response B, on the other hand, is accurate as it matches the reference answer. Although Response B is not as detailed or elaborate as Response A, it is more important that the response is accurate. \\n\\nFinal Decision: [[B]]\\n',\n",
"{'reasoning': 'Response A provides an incorrect answer by stating there are three dogs in the park, while the reference answer indicates there are four. Response B, on the other hand, provides the correct answer, matching the reference answer. Although Response B is less detailed, it is accurate and directly answers the question. \\n\\nTherefore, the better response is [[B]].\\n',\n",
" 'value': 'B',\n",
" 'score': 0}"
]
@@ -90,7 +90,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "7f56c76e-a39b-4509-8b8a-8a2afe6c3da1",
"metadata": {
"tags": []
@@ -104,7 +104,7 @@
" 'score': 0}"
]
},
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -129,7 +129,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "de84a958-1330-482b-b950-68bcf23f9e35",
"metadata": {},
"outputs": [],
@@ -138,12 +138,12 @@
"\n",
"llm = ChatAnthropic(temperature=0)\n",
"\n",
"evaluator = load_evaluator(\"labeled_pairwise_string\", llm=llm)"
"evaluator = load_evaluator(\"pairwise_string\", llm=llm, requires_reference=True)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "e162153f-d50a-4a7c-a033-019dabbc954c",
"metadata": {
"tags": []
@@ -152,12 +152,12 @@
{
"data": {
"text/plain": [
"{'reasoning': 'Here is my assessment:\\n\\nResponse B is better because it directly answers the question by stating the number \"4\", which matches the ground truth reference answer. Response A provides an incorrect number of dogs, stating there are three dogs when the reference says there are four. \\n\\nResponse B is more helpful, relevant, accurate and provides the right level of detail by simply stating the number that was asked for. Response A provides an inaccurate number, so is less helpful and accurate.\\n\\nIn summary, Response B better followed the instructions and answered the question correctly per the reference answer.\\n\\n[[B]]',\n",
"{'reasoning': 'Response A provides a specific number but is inaccurate based on the reference answer. Response B provides the correct number but lacks detail or explanation. Overall, Response B is more helpful and accurate in directly answering the question, despite lacking depth or creativity.\\n\\n[[B]]\\n',\n",
" 'value': 'B',\n",
" 'score': 0}"
]
},
"execution_count": 6,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -185,7 +185,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 12,
"id": "fb817efa-3a4d-439d-af8c-773b89d97ec9",
"metadata": {
"tags": []
@@ -210,13 +210,13 @@
"\"\"\"\n",
")\n",
"evaluator = load_evaluator(\n",
" \"labeled_pairwise_string\", prompt=prompt_template\n",
" \"pairwise_string\", prompt=prompt_template, requires_reference=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 13,
"id": "d40aa4f0-cfd5-4cb4-83c8-8d2300a04c2f",
"metadata": {
"tags": []
@@ -237,7 +237,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 14,
"id": "9467bb42-7a31-4071-8f66-9ed2c6f06dcd",
"metadata": {
"tags": []
@@ -246,12 +246,12 @@
{
"data": {
"text/plain": [
"{'reasoning': 'Option A is more similar to the reference label because it mentions the same dog\\'s name, \"fido\". Option B mentions a different name, \"spot\". Therefore, A is more similar to the reference label. \\n',\n",
"{'reasoning': \"Option A is most similar to the reference label. Both the reference label and option A state that the dog's name is Fido. Option B, on the other hand, gives a different name for the dog. Therefore, option A is the most similar to the reference label. \\n\",\n",
" 'value': 'A',\n",
" 'score': 1}"
]
},
"execution_count": 9,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}

View File

@@ -30,12 +30,7 @@
"source": [
"from langchain.evaluation import load_evaluator\n",
"\n",
"evaluator = load_evaluator(\"criteria\", criteria=\"conciseness\")\n",
"\n",
"# This is equivalent to loading using the enum\n",
"from langchain.evaluation import EvaluatorType\n",
"\n",
"evaluator = load_evaluator(EvaluatorType.CRITERIA, criteria=\"conciseness\")"
"evaluator = load_evaluator(\"criteria\", criteria=\"conciseness\")"
]
},
{
@@ -50,7 +45,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'The criterion is conciseness, which means the submission should be brief and to the point. \\n\\nLooking at the submission, the answer to the question \"What\\'s 2+2?\" is indeed \"four\". However, the respondent has added extra information, stating \"That\\'s an elementary question.\" This statement does not contribute to answering the question and therefore makes the response less concise.\\n\\nTherefore, the submission does not meet the criterion of conciseness.\\n\\nN', 'value': 'N', 'score': 0}\n"
"{'reasoning': 'The criterion is conciseness. This means the submission should be brief and to the point. \\n\\nLooking at the submission, the answer to the task is included, but there is additional commentary that is not necessary to answer the question. The phrase \"That\\'s an elementary question\" and \"The answer you\\'re looking for is\" could be removed and the answer would still be clear and correct. \\n\\nTherefore, the submission is not concise and does not meet the criterion. \\n\\nN', 'value': 'N', 'score': 0}\n"
]
}
],
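The criteria outputs above follow one pattern: free-form reasoning ending in a single Y/N verdict, which is then mapped to a 1/0 score. A sketch of that final parsing step (a hypothetical helper, not the library's code):

```python
def parse_criteria_verdict(text: str) -> dict:
    """Split reasoning from a trailing Y/N verdict and map it to a score,
    mirroring the {'reasoning', 'value', 'score'} outputs shown above."""
    reasoning, _, verdict = text.rstrip().rpartition("\n")
    verdict = verdict.strip()
    if verdict not in ("Y", "N"):
        raise ValueError(f"no Y/N verdict found: {verdict!r}")
    return {"reasoning": reasoning.strip(), "value": verdict, "score": int(verdict == "Y")}
```

This is why the reasoning strings in the outputs above end with a bare `N` or `Y` on its own line.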
@@ -64,45 +59,7 @@
},
{
"cell_type": "markdown",
"id": "c40b1ac7-8f95-48ed-89a2-623bcc746461",
"metadata": {},
"source": [
"## Using Reference Labels\n",
"\n",
"Some criteria (such as correctness) require reference labels to work correctly. To do this, use the `labeled_criteria` evaluator and call the evaluator with a `reference` string."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "20d8a86b-beba-42ce-b82c-d9e5ebc13686",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"With ground truth: 1\n"
]
}
],
"source": [
"evaluator = load_evaluator(\"labeled_criteria\", criteria=\"correctness\")\n",
"\n",
"# We can even override the model's learned knowledge using ground truth labels\n",
"eval_result = evaluator.evaluate_strings(\n",
" input=\"What is the capital of the US?\",\n",
" prediction=\"Topeka, KS\",\n",
" reference=\"The capital of the US is Topeka, KS, where it permanently moved from Washington D.C. on May 16, 2023\",\n",
")\n",
"print(f'With ground truth: {eval_result[\"score\"]}')"
]
},
{
"cell_type": "markdown",
"id": "e05b5748-d373-4ff8-85d9-21da4641e84c",
"id": "43397a9f-ccca-4f91-b0e1-df0cada2efb1",
"metadata": {},
"source": [
"**Default Criteria**\n",
@@ -113,36 +70,77 @@
},
{
"cell_type": "code",
"execution_count": 4,
"id": "47de7359-db3e-4cad-bcfa-4fe834dea893",
"metadata": {},
"execution_count": 3,
"id": "8c4ec9dd-6557-4f23-8480-c822eb6ec552",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"[<Criteria.CONCISENESS: 'conciseness'>,\n",
" <Criteria.RELEVANCE: 'relevance'>,\n",
" <Criteria.CORRECTNESS: 'correctness'>,\n",
" <Criteria.COHERENCE: 'coherence'>,\n",
" <Criteria.HARMFULNESS: 'harmfulness'>,\n",
" <Criteria.MALICIOUSNESS: 'maliciousness'>,\n",
" <Criteria.HELPFULNESS: 'helpfulness'>,\n",
" <Criteria.CONTROVERSIALITY: 'controversiality'>,\n",
" <Criteria.MISOGYNY: 'misogyny'>,\n",
" <Criteria.CRIMINALITY: 'criminality'>,\n",
" <Criteria.INSENSITIVITY: 'insensitivity'>]"
"['conciseness',\n",
" 'relevance',\n",
" 'correctness',\n",
" 'coherence',\n",
" 'harmfulness',\n",
" 'maliciousness',\n",
" 'helpfulness',\n",
" 'controversiality',\n",
" 'mysogyny',\n",
" 'criminality',\n",
" 'insensitive']"
]
},
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.evaluation import Criteria\n",
"from langchain.evaluation import CriteriaEvalChain\n",
"\n",
"# For a list of other default supported criteria, try calling `supported_default_criteria`\n",
"list(Criteria)"
"CriteriaEvalChain.get_supported_default_criteria()"
]
},
{
"cell_type": "markdown",
"id": "c40b1ac7-8f95-48ed-89a2-623bcc746461",
"metadata": {},
"source": [
"## Using Reference Labels\n",
"\n",
"Some criteria (such as correctness) require reference labels to work correctly. To do this, initialize with `requires_reference=True` and call the evaluator with a `reference` string."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "20d8a86b-beba-42ce-b82c-d9e5ebc13686",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"With ground truth: 1\n",
"Without ground truth: 0\n"
]
}
],
"source": [
"evaluator = load_evaluator(\"criteria\", criteria=\"correctness\", requires_reference=True)\n",
"\n",
"# We can even override the model's learned knowledge using ground truth labels\n",
"eval_result = evaluator.evaluate_strings(\n",
" input=\"What is the capital of the US?\",\n",
" prediction=\"Topeka, KS\",\n",
" reference=\"The capital of the US is Topeka, KS, where it permanently moved from Washington D.C. on May 16, 2023\",\n",
")\n",
"print(f'With ground truth: {eval_result[\"score\"]}')"
]
},
{
@@ -159,7 +157,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"id": "bafa0a11-2617-4663-84bf-24df7d0736be",
"metadata": {},
"outputs": [
@@ -167,19 +165,18 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'The criterion asks if the output contains numeric or mathematical information. \\n\\nThe submission is a joke that says, \"I ate some square pie but I don\\'t know the square of pi.\" \\n\\nIn this joke, there is a reference to the mathematical term \"square\" and the mathematical constant \"pi\". \\n\\nTherefore, the submission does contain numeric or mathematical information, and it meets the criterion. \\n\\nY', 'value': 'Y', 'score': 1}\n"
"{'reasoning': 'The criterion is asking if the output contains numeric information. The submission does mention the \"late 16th century,\" which is a numeric information. Therefore, the submission meets the criterion.\\n\\nY', 'value': 'Y', 'score': 1}\n"
]
}
],
"source": [
"custom_criterion = {\"numeric\": \"Does the output contain numeric or mathematical information?\"}\n",
"custom_criterion = {\"numeric\": \"Does the output contain numeric information?\"}\n",
"\n",
"eval_chain = load_evaluator(\n",
" EvaluatorType.CRITERIA,\n",
" llm=eval_llm,\n",
" criteria=custom_criterion,\n",
")\n",
"query = \"Tell me a joke\"\n",
"prediction = \"I ate some square pie but I don't know the square of pi.\"\n",
"eval_result = eval_chain.evaluate_strings(prediction=prediction, input=query)\n",
"print(eval_result)"
]
@@ -197,7 +194,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 7,
"id": "99e3c242-5b12-4bd5-b487-64990a159655",
"metadata": {},
"outputs": [
@@ -223,7 +220,7 @@
" ConstitutionalPrinciple(critique_request='Discuss the assistants last response to the human and whether it is in any way insensitive, sexist, racist, or socially inappropriate.', revision_request='Revise the assistants last response to remove content that is in any way insensitive, sexist, racist, or socially inappropriate.', name='insensitive'))]"
]
},
"execution_count": 9,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -237,23 +234,28 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 8,
"id": "f4061bc0-a88a-4d1c-88a0-8003bba084c7",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'The criterion asks to identify if the assistant\\'s response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.\\n\\nLooking at the assistant\\'s response, it is clear that it is not racist or sexist as it does not discriminate or stereotype based on race or gender. \\n\\nThe response is also not illegal as it does not involve any criminal activity or encourage any form of illegal behavior.\\n\\nThe response is not dangerous as it does not pose a physical threat or risk to anyone\\'s safety.\\n\\nHowever, the assistant\\'s response can be considered harmful and toxic as it uses derogatory language (\"lilly-livered nincompoop\") to describe \\'Will\\'. This can be seen as a form of verbal abuse or insult, which can cause emotional harm.\\n\\nThe response can also be seen as unethical, as it is generally considered inappropriate to insult or belittle someone in this manner.\\n\\nN', 'value': 'N', 'score': 0}\n"
]
"data": {
"text/plain": [
"{'reasoning': 'The criterion asks to identify if the assistant\\'s response is harmful, unethical, racist, sexist, toxic, dangerous, or illegal.\\n\\nThe assistant\\'s response is \"I say that man is a lilly-livered nincompoop\". This is a negative statement about a person named Will. \\n\\nThe term \"lilly-livered\" is an old-fashioned term meaning cowardly, and \"nincompoop\" is a term meaning a foolish or stupid person. \\n\\nWhile this is a negative and potentially insulting statement, it does not appear to be harmful in a physical sense, unethical, racist, sexist, toxic, dangerous, or illegal. \\n\\nIt could be argued that the statement is harmful in a psychological or emotional sense, as it is insulting. However, without further context, it\\'s hard to definitively say if this meets the criteria of being harmful.\\n\\nN',\n",
" 'value': 'N',\n",
" 'score': 0}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator = load_evaluator(\n",
" EvaluatorType.CRITERIA, criteria=PRINCIPLES[\"harmful1\"]\n",
"eval_chain = load_evaluator(\n",
" EvaluatorType.CRITERIA, llm=eval_llm, criteria=PRINCIPLES[\"harmful1\"]\n",
")\n",
"eval_result = evaluator.evaluate_strings(\n",
" prediction=\"I say that man is a lilly-livered nincompoop\",\n",
@@ -276,7 +278,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 10,
"id": "1717162d-f76c-4a14-9ade-168d6fa42b7a",
"metadata": {
"tags": []
@@ -289,7 +291,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 11,
"id": "8727e6f4-aaba-472d-bb7d-09fc1a0f0e2a",
"metadata": {
"tags": []
@@ -304,7 +306,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 12,
"id": "3f6f0d8b-cf42-4241-85ae-35b3ce8152a0",
"metadata": {
"tags": []
@@ -314,7 +316,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'Step 1) Analyze the conciseness criterion: Is the submission concise and to the point?\\nStep 2) The submission provides extraneous information beyond just answering the question directly. It characterizes the question as \"elementary\" and provides reasoning for why the answer is 4. This additional commentary makes the submission not fully concise.\\nStep 3) Therefore, based on the analysis of the conciseness criterion, the submission does not meet the criteria.\\n\\nN', 'value': 'N', 'score': 0}\n"
"{'reasoning': 'Here is my step-by-step reasoning for each criterion:\\n\\nconciseness: The submission is not concise. It contains unnecessary words and phrases like \"That\\'s an elementary question\" and \"you\\'re looking for\". The answer could have simply been stated as \"4\" to be concise.\\n\\nN', 'value': 'N', 'score': 0}\n"
]
}
],
@@ -338,7 +340,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 13,
"id": "22e57704-682f-44ff-96ba-e915c73269c0",
"metadata": {
"tags": []
@@ -362,13 +364,13 @@
"prompt = PromptTemplate.from_template(fstring)\n",
"\n",
"evaluator = load_evaluator(\n",
" \"labeled_criteria\", criteria=\"correctness\", prompt=prompt\n",
" \"criteria\", criteria=\"correctness\", prompt=prompt, requires_reference=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 14,
"id": "5d6b0eca-7aea-4073-a65a-18c3a9cdb5af",
"metadata": {
"tags": []
@@ -378,7 +380,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'reasoning': 'Correctness: No, the response is not correct. The expected response was \"It\\'s 17 now.\" but the response given was \"What\\'s 2+2? That\\'s an elementary question. The answer you\\'re looking for is that two and two is four.\"', 'value': 'N', 'score': 0}\n"
"{'reasoning': 'Correctness: No, the submission is not correct. The expected response was \"It\\'s 17 now.\" but the response given was \"What\\'s 2+2? That\\'s an elementary question. The answer you\\'re looking for is that two and two is four.\"', 'value': 'N', 'score': 0}\n"
]
}
],

View File

@@ -53,7 +53,7 @@
{
"data": {
"text/plain": [
"{'score': 0.11555555555555552}"
"{'score': 12}"
]
},
"execution_count": 3,
@@ -79,7 +79,7 @@
{
"data": {
"text/plain": [
"{'score': 0.0724999999999999}"
"{'score': 4}"
]
},
"execution_count": 4,
@@ -143,7 +143,7 @@
"outputs": [],
"source": [
"jaro_evaluator = load_evaluator(\n",
" \"string_distance\", distance=StringDistance.JARO\n",
" \"string_distance\", distance=StringDistance.JARO, requires_reference=True\n",
")"
]
},
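The string-distance scores above come from edit-distance metrics normalized into a 0-1 range. A self-contained sketch of the idea, using classic Levenshtein distance (the evaluator's exact metric and normalization may differ):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def normalized_distance(a: str, b: str) -> float:
    """Scale the raw edit distance into [0, 1] by the longer string's length."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))
```

A score near 0 means the strings are nearly identical; a score near 1 means they share almost nothing.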

View File

@@ -11,7 +11,7 @@
"\n",
"[PromptLayer](https://promptlayer.com) is an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to set up the `PromptLayerCallbackHandler`. \n",
"\n",
"While PromptLayer does have LLMs that integrate directly with LangChain (eg [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
"While PromptLayer does have LLMs that integrate directly with LangChain (eg [`PromptLayerOpenAI`](https://python.langchain.com/docs/modules/model_io/models/llms/integrations/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
"\n",
"See [our docs](https://docs.promptlayer.com/languages/langchain) for more information."
]
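The callback pattern such a handler plugs into can be sketched framework-free: handlers get invoked around the model call and record whatever they need. (The class and function names below are illustrative, not LangChain's or PromptLayer's actual API.)

```python
class BaseCallbackHandler:
    """Hooks invoked around an LLM call; subclasses override what they need."""
    def on_llm_start(self, prompt: str) -> None: ...
    def on_llm_end(self, response: str) -> None: ...

class LoggingHandler(BaseCallbackHandler):
    """Records every start/end event, the way an observability handler would."""
    def __init__(self):
        self.events = []
    def on_llm_start(self, prompt):
        self.events.append(("start", prompt))
    def on_llm_end(self, response):
        self.events.append(("end", response))

def run_llm(prompt: str, handlers: list) -> str:
    for h in handlers:
        h.on_llm_start(prompt)
    response = prompt[::-1]  # stand-in for a real model call
    for h in handlers:
        h.on_llm_end(response)
    return response
```

Because the handler is attached to the call rather than to one specific LLM class, it works with any model the framework supports, which is why the callback route is recommended over provider-specific wrappers.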

View File

@@ -1,220 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1ab83660",
"metadata": {},
"source": [
"# Etherscan Loader\n",
"## Overview\n",
"\n",
"The Etherscan loader uses the Etherscan API to load transaction histories under a specific account on Ethereum Mainnet.\n",
"\n",
"You will need an Etherscan API key to proceed. The free API key has a quota of 5 calls per second.\n",
"\n",
"The loader supports the following six functionalities:\n",
"* Retrieve normal transactions under a specific account on Ethereum Mainnet\n",
"* Retrieve internal transactions under a specific account on Ethereum Mainnet\n",
"* Retrieve ERC20 transactions under a specific account on Ethereum Mainnet\n",
"* Retrieve ERC721 transactions under a specific account on Ethereum Mainnet\n",
"* Retrieve ERC1155 transactions under a specific account on Ethereum Mainnet\n",
"* Retrieve the Ethereum balance in wei under a specific account on Ethereum Mainnet\n",
"\n",
"\n",
"If the account does not have corresponding transactions, the loader will return a list with one document whose content is ''.\n",
"\n",
"You can pass different filters to the loader to access the functionalities mentioned above:\n",
"* \"normal_transaction\"\n",
"* \"internal_transaction\"\n",
"* \"erc20_transaction\"\n",
"* \"eth_balance\"\n",
"* \"erc721_transaction\"\n",
"* \"erc1155_transaction\"\n",
"The filter defaults to \"normal_transaction\".\n",
"\n",
"If you have any questions, you can refer to the [Etherscan API Doc](https://etherscan.io/tx/0x0ffa32c787b1398f44303f731cb06678e086e4f82ce07cebf75e99bb7c079c77) or contact me via i@inevitable.tech.\n",
"\n",
"All functions related to transaction histories are restricted to 1000 records maximum because of the Etherscan limit. You can use the following parameters to find the transaction histories you need:\n",
"* offset: defaults to 20. Shows 20 transactions at a time.\n",
"* page: defaults to 1. This controls pagination.\n",
"* start_block: defaults to 0. Transaction history starts from block 0.\n",
"* end_block: defaults to 99999999. Transaction history ends at block 99999999.\n",
"* sort: \"desc\" or \"asc\". Defaults to \"desc\" to get the latest transactions first."
]
},
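The pagination parameters described above map onto the Etherscan API's query string. A minimal sketch of assembling one (a hypothetical helper written for illustration; the loader's internals may differ):

```python
from urllib.parse import urlencode

def build_etherscan_query(address: str, api_key: str, *,
                          action: str = "txlist", offset: int = 20, page: int = 1,
                          start_block: int = 0, end_block: int = 99999999,
                          sort: str = "desc") -> str:
    """Assemble an Etherscan account-module query with the defaults described above."""
    params = {
        "module": "account", "action": action, "address": address,
        "startblock": start_block, "endblock": end_block,
        "page": page, "offset": offset, "sort": sort, "apikey": api_key,
    }
    return "https://api.etherscan.io/api?" + urlencode(params)
```

Incrementing `page` while keeping `offset` fixed walks through the (at most 1000) available records.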
{
"cell_type": "markdown",
"id": "d72d4e22",
"metadata": {},
"source": [
"# Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2911e51e",
"metadata": {},
"outputs": [],
"source": [
"%pip install langchain -q"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "208e2fbf",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import EtherscanLoader\n",
"import os"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "5d24b650",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"ETHERSCAN_API_KEY\"] = etherscanAPIKey"
]
},
{
"cell_type": "markdown",
"id": "3bcbb63e",
"metadata": {},
"source": [
"# Create a ERC20 transaction loader"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "d525e6c8",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'blockNumber': '13242975',\n",
" 'timeStamp': '1631878751',\n",
" 'hash': '0x366dda325b1a6570928873665b6b418874a7dedf7fee9426158fa3536b621788',\n",
" 'nonce': '28',\n",
" 'blockHash': '0x5469dba1b1e1372962cf2be27ab2640701f88c00640c4d26b8cc2ae9ac256fb6',\n",
" 'from': '0x2ceee24f8d03fc25648c68c8e6569aa0512f6ac3',\n",
" 'contractAddress': '0x2ceee24f8d03fc25648c68c8e6569aa0512f6ac3',\n",
" 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b',\n",
" 'value': '298131000000000',\n",
" 'tokenName': 'ABCHANGE.io',\n",
" 'tokenSymbol': 'XCH',\n",
" 'tokenDecimal': '9',\n",
" 'transactionIndex': '71',\n",
" 'gas': '15000000',\n",
" 'gasPrice': '48614996176',\n",
" 'gasUsed': '5712724',\n",
" 'cumulativeGasUsed': '11507920',\n",
" 'input': 'deprecated',\n",
" 'confirmations': '4492277'}"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"account_address = \"0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b\"\n",
"loader = EtherscanLoader(account_address, filter=\"erc20_transaction\")\n",
"result = loader.load()\n",
"eval(result[0].page_content)"
]
},
{
"cell_type": "markdown",
"id": "2a1ecce0",
"metadata": {},
"source": [
"# Create a normal transaction loader with customized parameters"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "07aa2b6c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"20\n"
]
},
{
"data": {
"text/plain": [
"[Document(page_content=\"{'blockNumber': '1723771', 'timeStamp': '1466213371', 'hash': '0xe00abf5fa83a4b23ee1cc7f07f9dda04ab5fa5efe358b315df8b76699a83efc4', 'nonce': '3155', 'blockHash': '0xc2c2207bcaf341eed07f984c9a90b3f8e8bdbdbd2ac6562f8c2f5bfa4b51299d', 'transactionIndex': '5', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '13149213761000000000', 'gas': '90000', 'gasPrice': '22655598156', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '126000', 'gasUsed': '21000', 'confirmations': '16011481', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xe00abf5fa83a4b23ee1cc7f07f9dda04ab5fa5efe358b315df8b76699a83efc4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1727090', 'timeStamp': '1466262018', 'hash': '0xd5a779346d499aa722f72ffe7cd3c8594a9ddd91eb7e439e8ba92ceb7bc86928', 'nonce': '3267', 'blockHash': '0xc0cff378c3446b9b22d217c2c5f54b1c85b89a632c69c55b76cdffe88d2b9f4d', 'transactionIndex': '20', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11521979886000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '3806725', 'gasUsed': '21000', 'confirmations': '16008162', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xd5a779346d499aa722f72ffe7cd3c8594a9ddd91eb7e439e8ba92ceb7bc86928', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1730337', 'timeStamp': '1466308222', 'hash': '0xceaffdb3766d2741057d402738eb41e1d1941939d9d438c102fb981fd47a87a4', 'nonce': '3344', 'blockHash': '0x3a52d28b8587d55c621144a161a0ad5c37dd9f7d63b629ab31da04fa410b2cfa', 'transactionIndex': '1', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9783400526000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '60788', 'gasUsed': '21000', 'confirmations': '16004915', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0xceaffdb3766d2741057d402738eb41e1d1941939d9d438c102fb981fd47a87a4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1733479', 'timeStamp': '1466352351', 'hash': '0x720d79bf78775f82b40280aae5abfc347643c5f6708d4bf4ec24d65cd01c7121', 'nonce': '3367', 'blockHash': '0x9928661e7ae125b3ae0bcf5e076555a3ee44c52ae31bd6864c9c93a6ebb3f43e', 'transactionIndex': '0', 'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '1570706444000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '16001773', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x3763e6e1228bfeab94191c856412d1bb0a8e6996', 'tx_hash': '0x720d79bf78775f82b40280aae5abfc347643c5f6708d4bf4ec24d65cd01c7121', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1734172', 'timeStamp': '1466362463', 'hash': '0x7a062d25b83bafc9fe6b22bc6f5718bca333908b148676e1ac66c0adeccef647', 'nonce': '1016', 'blockHash': '0x8a8afe2b446713db88218553cfb5dd202422928e5e0bc00475ed2f37d95649de', 'transactionIndex': '4', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '6322276709000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '105333', 'gasUsed': '21000', 'confirmations': '16001080', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x7a062d25b83bafc9fe6b22bc6f5718bca333908b148676e1ac66c0adeccef647', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1737276', 'timeStamp': '1466406037', 'hash': '0xa4e89bfaf075abbf48f96700979e6c7e11a776b9040113ba64ef9c29ac62b19b', 'nonce': '1024', 'blockHash': '0xe117cad73752bb485c3bef24556e45b7766b283229180fcabc9711f3524b9f79', 'transactionIndex': '35', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9976891868000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '3187163', 'gasUsed': '21000', 'confirmations': '15997976', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xa4e89bfaf075abbf48f96700979e6c7e11a776b9040113ba64ef9c29ac62b19b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1740314', 'timeStamp': '1466450262', 'hash': '0x6e1a22dcc6e2c77a9451426fb49e765c3c459dae88350e3ca504f4831ec20e8a', 'nonce': '1051', 'blockHash': '0x588d17842819a81afae3ac6644d8005c12ce55ddb66c8d4c202caa91d4e8fdbe', 'transactionIndex': '6', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8060633765000000000', 'gas': '90000', 'gasPrice': '22926905859', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '153077', 'gasUsed': '21000', 'confirmations': '15994938', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x6e1a22dcc6e2c77a9451426fb49e765c3c459dae88350e3ca504f4831ec20e8a', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1743384', 'timeStamp': '1466494099', 'hash': '0xdbfcc15f02269fc3ae27f69e344a1ac4e08948b12b76ebdd78a64d8cafd511ef', 'nonce': '1068', 'blockHash': '0x997245108c84250057fda27306b53f9438ad40978a95ca51d8fd7477e73fbaa7', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9541921352000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '119650', 'gasUsed': '21000', 'confirmations': '15991868', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xdbfcc15f02269fc3ae27f69e344a1ac4e08948b12b76ebdd78a64d8cafd511ef', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1746405', 'timeStamp': '1466538123', 'hash': '0xbd4f9602f7fff4b8cc2ab6286efdb85f97fa114a43f6df4e6abc88e85b89e97b', 'nonce': '1092', 'blockHash': '0x3af3966cdaf22e8b112792ee2e0edd21ceb5a0e7bf9d8c168a40cf22deb3690c', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8433783799000000000', 'gas': '90000', 'gasPrice': '25689279306', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15988847', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xbd4f9602f7fff4b8cc2ab6286efdb85f97fa114a43f6df4e6abc88e85b89e97b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1749459', 'timeStamp': '1466582044', 'hash': '0x28c327f462cc5013d81c8682c032f014083c6891938a7bdeee85a1c02c3e9ed4', 'nonce': '1096', 'blockHash': '0x5fc5d2a903977b35ce1239975ae23f9157d45d7bd8a8f6205e8ce270000797f9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10269065805000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15985793', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x28c327f462cc5013d81c8682c032f014083c6891938a7bdeee85a1c02c3e9ed4', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1752614', 'timeStamp': '1466626168', 'hash': '0xc3849e550ca5276d7b3c51fa95ad3ae62c1c164799d33f4388fe60c4e1d4f7d8', 'nonce': '1118', 'blockHash': '0x88ef054b98e47504332609394e15c0a4467f84042396717af6483f0bcd916127', 'transactionIndex': '11', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11325836780000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '252000', 'gasUsed': '21000', 'confirmations': '15982638', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xc3849e550ca5276d7b3c51fa95ad3ae62c1c164799d33f4388fe60c4e1d4f7d8', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1755659', 'timeStamp': '1466669931', 'hash': '0xb9f891b7c3d00fcd64483189890591d2b7b910eda6172e3bf3973c5fd3d5a5ae', 'nonce': '1133', 'blockHash': '0x2983972217a91343860415d1744c2a55246a297c4810908bbd3184785bc9b0c2', 'transactionIndex': '14', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '13226475343000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '2674679', 'gasUsed': '21000', 'confirmations': '15979593', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xb9f891b7c3d00fcd64483189890591d2b7b910eda6172e3bf3973c5fd3d5a5ae', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1758709', 'timeStamp': '1466713652', 'hash': '0xd6cce5b184dc7fce85f305ee832df647a9c4640b68e9b79b6f74dc38336d5622', 'nonce': '1147', 'blockHash': '0x1660de1e73067251be0109d267a21ffc7d5bde21719a3664c7045c32e771ecf9', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9758447294000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15976543', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xd6cce5b184dc7fce85f305ee832df647a9c4640b68e9b79b6f74dc38336d5622', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1761783', 'timeStamp': '1466757809', 'hash': '0xd01545872629956867cbd65fdf5e97d0dde1a112c12e76a1bfc92048d37f650f', 'nonce': '1169', 'blockHash': '0x7576961afa4218a3264addd37a41f55c444dd534e9410dbd6f93f7fe20e0363e', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10197126683000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '63000', 'gasUsed': '21000', 'confirmations': '15973469', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xd01545872629956867cbd65fdf5e97d0dde1a112c12e76a1bfc92048d37f650f', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1764895', 'timeStamp': '1466801683', 'hash': '0x620b91b12af7aac75553b47f15742e2825ea38919cfc8082c0666f404a0db28b', 'nonce': '1186', 'blockHash': '0x2e687643becd3c36e0c396a02af0842775e17ccefa0904de5aeca0a9a1aa795e', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '8690241462000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '168000', 'gasUsed': '21000', 'confirmations': '15970357', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x620b91b12af7aac75553b47f15742e2825ea38919cfc8082c0666f404a0db28b', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1767936', 'timeStamp': '1466845682', 'hash': '0x758efa27576cd17ebe7b842db4892eac6609e3962a4f9f57b7c84b7b1909512f', 'nonce': '1211', 'blockHash': '0xb01d8fd47b3554a99352ac3e5baf5524f314cfbc4262afcfbea1467b2d682898', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11914401843000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15967316', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x758efa27576cd17ebe7b842db4892eac6609e3962a4f9f57b7c84b7b1909512f', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1770911', 'timeStamp': '1466888890', 'hash': '0x9d84470b54ab44b9074b108a0e506cd8badf30457d221e595bb68d63e926b865', 'nonce': '1212', 'blockHash': '0x79a9de39276132dab8bf00dc3e060f0e8a14f5e16a0ee4e9cc491da31b25fe58', 'transactionIndex': '0', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '10918214730000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '21000', 'gasUsed': '21000', 'confirmations': '15964341', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x9d84470b54ab44b9074b108a0e506cd8badf30457d221e595bb68d63e926b865', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1774044', 'timeStamp': '1466932983', 'hash': '0x958d85270b58b80f1ad228f716bbac8dd9da7c5f239e9f30d8edeb5bb9301d20', 'nonce': '1240', 'blockHash': '0x69cee390378c3b886f9543fb3a1cb2fc97621ec155f7884564d4c866348ce539', 'transactionIndex': '2', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '9979637283000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '63000', 'gasUsed': '21000', 'confirmations': '15961208', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0x958d85270b58b80f1ad228f716bbac8dd9da7c5f239e9f30d8edeb5bb9301d20', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1777057', 'timeStamp': '1466976422', 'hash': '0xe76ca3603d2f4e7134bdd7a1c3fd553025fc0b793f3fd2a75cd206b8049e74ab', 'nonce': '1248', 'blockHash': '0xc7cacda0ac38c99f1b9bccbeee1562a41781d2cfaa357e8c7b4af6a49584b968', 'transactionIndex': '7', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '4556173496000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '168000', 'gasUsed': '21000', 'confirmations': '15958195', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xe76ca3603d2f4e7134bdd7a1c3fd553025fc0b793f3fd2a75cd206b8049e74ab', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'}),\n",
" Document(page_content=\"{'blockNumber': '1780120', 'timeStamp': '1467020353', 'hash': '0xc5ec8cecdc9f5ed55a5b8b0ad79c964fb5c49dc1136b6a49e981616c3e70bbe6', 'nonce': '1266', 'blockHash': '0xfc0e066e5b613239e1a01e6d582e7ab162ceb3ca4f719dfbd1a0c965adcfe1c5', 'transactionIndex': '1', 'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b', 'value': '11890330240000000000', 'gas': '90000', 'gasPrice': '20000000000', 'isError': '0', 'txreceipt_status': '', 'input': '0x', 'contractAddress': '', 'cumulativeGasUsed': '42000', 'gasUsed': '21000', 'confirmations': '15955132', 'methodId': '0x', 'functionName': ''}\", metadata={'from': '0x16545fb79dbee1ad3a7f868b7661c023f372d5de', 'tx_hash': '0xc5ec8cecdc9f5ed55a5b8b0ad79c964fb5c49dc1136b6a49e981616c3e70bbe6', 'to': '0x9dd134d14d1e65f84b706d6f205cd5b1cd03a46b'})]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = EtherscanLoader(\n",
" account_address,\n",
" page=2,\n",
" offset=20,\n",
" start_block=10000,\n",
" end_block=8888888888,\n",
" sort=\"asc\",\n",
")\n",
"result = loader.load()\n",
"result"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
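The `value` fields in the loader output above are raw wei strings. A small helper, independent of LangChain, to convert them to ether for display (1 ETH = 10^18 wei):

```python
from decimal import Decimal

WEI_PER_ETH = Decimal(10) ** 18  # 1 ether = 10**18 wei

def wei_to_eth(value: str) -> Decimal:
    """Convert a wei amount, as found in the Etherscan `value` field, to ether."""
    return Decimal(value) / WEI_PER_ETH

# First transaction above carried value '1570706444000000000':
print(wei_to_eth("1570706444000000000"))  # → 1.570706444
```

Using `Decimal` rather than `float` keeps the full 18 digits of precision that wei amounts can carry.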

View File

@@ -13,7 +13,7 @@
"\n",
"## Prerequisites\n",
"\n",
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify.html) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/modules/agents/tools/integrations/apify.html) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
]
},
{

View File

@@ -44,7 +44,7 @@
},
"outputs": [],
"source": [
"#!pip install py-trello beautifulsoup4 lxml"
"#!pip install py-trello beautifulsoup4"
]
},
{

View File

@@ -36,7 +36,7 @@
"## Deployments\n",
"With Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.\n",
"\n",
"_**Note**: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the `AzureChatOpenAI` class. For docs on Azure chat see [Azure Chat OpenAI documentation](/docs/integrations/chat/azure_chat_openai)._\n",
"_**Note**: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the `AzureChatOpenAI` class. For docs on Azure chat see [Azure Chat OpenAI documentation](/docs/modules/model_io/models/chat/integrations/azure_chat_openai)._\n",
"\n",
"Let's say your deployment name is `text-davinci-002-prod`. In the `openai` Python API, you can specify this deployment with the `engine` parameter. For example:\n",
"\n",
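The hunk ends before the notebook's example cell, so as a rough sketch: in the legacy `openai` (<1.0) Python SDK, the Azure deployment name from the text above is passed as `engine` (the prompt and token values here are illustrative assumptions, and the API call itself is left commented out since it needs Azure credentials):

```python
# Sketch of a legacy openai<1.0 completion call against an Azure deployment.
params = {
    "engine": "text-davinci-002-prod",  # Azure deployment name, not an OpenAI model id
    "prompt": "Tell me a joke",         # illustrative prompt
    "max_tokens": 16,                   # illustrative limit
}
# openai.Completion.create(**params)  # requires openai<1.0 plus Azure credentials
print(params["engine"])
```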

File diff suppressed because one or more lines are too long

View File

@@ -22,7 +22,7 @@ Have `docker desktop` installed.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/airbyte_json).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/airbyte_json.html).
```python
from langchain.document_loaders import AirbyteJSONLoader

View File

@@ -25,4 +25,4 @@ pip install pyairtable
from langchain.document_loaders import AirtableLoader
```
See an [example](/docs/integrations/document_loaders/airtable.html).
See an [example](/docs/modules/data_connection/document_loaders/integrations/airtable.html).

View File

@@ -21,7 +21,7 @@ ALEPH_ALPHA_API_KEY = getpass()
## LLM
See a [usage example](/docs/integrations/llms/aleph_alpha).
See a [usage example](/docs/modules/model_io/models/llms/integrations/aleph_alpha.html).
```python
from langchain.llms import AlephAlpha
@@ -29,7 +29,7 @@ from langchain.llms import AlephAlpha
## Text Embedding Models
See a [usage example](/docs/integrations/text_embedding/aleph_alpha).
See a [usage example](/docs/modules/data_connection/text_embedding/integrations/aleph_alpha.html).
```python
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding, AlephAlphaAsymmetricSemanticEmbedding

View File

@@ -6,7 +6,7 @@ API Gateway handles all the tasks involved in accepting and processing up to hun
## LLM
See a [usage example](/docs/integrations/llms/amazon_api_gateway_example).
See a [usage example](/docs/modules/model_io/models/llms/integrations/amazon_api_gateway_example.html).
```python
from langchain.llms import AmazonAPIGateway

View File

@@ -12,4 +12,4 @@ To import this vectorstore:
from langchain.vectorstores import AnalyticDB
```
For a more detailed walkthrough of the AnalyticDB wrapper, see [this notebook](/docs/integrations/vectorstores/analyticdb.html)
For a more detailed walkthrough of the AnalyticDB wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/analyticdb.html)

View File

@@ -11,7 +11,7 @@ pip install annoy
## Vectorstore
See a [usage example](/docs/integrations/vectorstores/annoy).
See a [usage example](/docs/modules/data_connection/vectorstores/integrations/annoy.html).
```python
from langchain.vectorstores import Annoy

View File

@@ -32,7 +32,7 @@ You can use the `ApifyWrapper` to run Actors on the Apify platform.
from langchain.utilities import ApifyWrapper
```
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/apify.html).
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/modules/agents/tools/integrations/apify.html).
### Loader
@@ -43,4 +43,4 @@ You can also use our `ApifyDatasetLoader` to get data from Apify dataset.
from langchain.document_loaders import ApifyDatasetLoader
```
For a more detailed walkthrough of this loader, see [this notebook](/docs/integrations/document_loaders/apify_dataset.html).
For a more detailed walkthrough of this loader, see [this notebook](/docs/modules/data_connection/document_loaders/integrations/apify_dataset.html).

View File

@@ -21,7 +21,7 @@ pip install pymupdf
## Document Loader
See a [usage example](/docs/integrations/document_loaders/arxiv).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/arxiv.html).
```python
from langchain.document_loaders import ArxivLoader
@@ -29,7 +29,7 @@ from langchain.document_loaders import ArxivLoader
## Retriever
See a [usage example](/docs/integrations/retrievers/arxiv).
See a [usage example](/docs/modules/data_connection/retrievers/integrations/arxiv.html).
```python
from langchain.retrievers import ArxivRetriever

View File

@@ -24,4 +24,4 @@ To import this vectorstore:
from langchain.vectorstores import AtlasDB
```
For a more detailed walkthrough of the AtlasDB wrapper, see [this notebook](/docs/integrations/vectorstores/atlas.html)
For a more detailed walkthrough of the AtlasDB wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/atlas.html)

View File

@@ -18,4 +18,4 @@ whether for semantic search or example selection.
from langchain.vectorstores import AwaDB
```
For a more detailed walkthrough of the AwaDB wrapper, see [here](/docs/integrations/vectorstores/awadb.html).
For a more detailed walkthrough of the AwaDB wrapper, see [here](/docs/modules/data_connection/vectorstores/integrations/awadb.html).

View File

@@ -16,9 +16,9 @@ pip install boto3
## Document Loader
See a [usage example for S3DirectoryLoader](/docs/integrations/document_loaders/aws_s3_directory.html).
See a [usage example for S3DirectoryLoader](/docs/modules/data_connection/document_loaders/integrations/aws_s3_directory.html).
See a [usage example for S3FileLoader](/docs/integrations/document_loaders/aws_s3_file.html).
See a [usage example for S3FileLoader](/docs/modules/data_connection/document_loaders/integrations/aws_s3_file.html).
```python
from langchain.document_loaders import S3DirectoryLoader, S3FileLoader

View File

@@ -9,7 +9,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/azlyrics).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/azlyrics.html).
```python
from langchain.document_loaders import AZLyricsLoader

View File

@@ -23,13 +23,13 @@ pip install azure-storage-blob
## Document Loader
See a [usage example for the Azure Blob Storage](/docs/integrations/document_loaders/azure_blob_storage_container.html).
See a [usage example for the Azure Blob Storage](/docs/modules/data_connection/document_loaders/integrations/azure_blob_storage_container.html).
```python
from langchain.document_loaders import AzureBlobStorageContainerLoader
```
See a [usage example for the Azure Files](/docs/integrations/document_loaders/azure_blob_storage_file.html).
See a [usage example for the Azure Files](/docs/modules/data_connection/document_loaders/integrations/azure_blob_storage_file.html).
```python
from langchain.document_loaders import AzureBlobStorageFileLoader

View File

@@ -17,7 +17,7 @@ See [set up instructions](https://learn.microsoft.com/en-us/azure/search/search-
## Retriever
See a [usage example](/docs/integrations/retrievers/azure_cognitive_search).
See a [usage example](/docs/modules/data_connection/retrievers/integrations/azure_cognitive_search.html).
```python
from langchain.retrievers import AzureCognitiveSearchRetriever

View File

@@ -27,7 +27,7 @@ os.environ["OPENAI_API_VERSION"] = "2023-05-15"
## LLM
See a [usage example](/docs/integrations/llms/azure_openai_example).
See a [usage example](/docs/modules/model_io/models/llms/integrations/azure_openai_example.html).
```python
from langchain.llms import AzureOpenAI
@@ -35,7 +35,7 @@ from langchain.llms import AzureOpenAI
## Text Embedding Models
See a [usage example](/docs/integrations/text_embedding/azureopenai)
See a [usage example](/docs/modules/data_connection/text_embedding/integrations/azureopenai.html)
```python
from langchain.embeddings import OpenAIEmbeddings
@@ -43,7 +43,7 @@ from langchain.embeddings import OpenAIEmbeddings
## Chat Models
See a [usage example](/docs/integrations/chat/azure_chat_openai)
See a [usage example](/docs/modules/model_io/models/chat/integrations/azure_chat_openai.html)
```python
from langchain.chat_models import AzureChatOpenAI

View File

@@ -10,7 +10,7 @@ pip install boto3
## LLM
See a [usage example](/docs/integrations/llms/bedrock).
See a [usage example](/docs/modules/model_io/models/llms/integrations/bedrock.html).
```python
from langchain import Bedrock
@@ -18,7 +18,7 @@ from langchain import Bedrock
## Text Embedding Models
See a [usage example](/docs/integrations/text_embedding/bedrock).
See a [usage example](/docs/modules/data_connection/text_embedding/integrations/bedrock.html).
```python
from langchain.embeddings import BedrockEmbeddings
```

View File

@@ -10,7 +10,7 @@ pip install bilibili-api-python
## Document Loader
See a [usage example](/docs/integrations/document_loaders/bilibili).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/bilibili.html).
```python
from langchain.document_loaders import BiliBiliLoader

View File

@@ -14,7 +14,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/blackboard).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/blackboard.html).
```python
from langchain.document_loaders import BlackboardLoader

View File

@@ -21,7 +21,7 @@ To get access to the Brave Search API, you need to [create an account and get an
## Document Loader
See a [usage example](/docs/integrations/document_loaders/brave_search).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/brave_search.html).
```python
from langchain.document_loaders import BraveSearchLoader
@@ -29,7 +29,7 @@ from langchain.document_loaders import BraveSearchLoader
## Tool
See a [usage example](/docs/integrations/tools/brave_search).
See a [usage example](/docs/modules/agents/tools/integrations/brave_search.html).
```python
from langchain.tools import BraveSearch

View File

@@ -18,7 +18,7 @@ pip install cassio
## Vector Store
See a [usage example](/docs/integrations/vectorstores/cassandra).
See a [usage example](/docs/modules/data_connection/vectorstores/integrations/cassandra.html).
```python
from langchain.memory import CassandraChatMessageHistory
@@ -28,7 +28,7 @@ from langchain.memory import CassandraChatMessageHistory
## Memory
See a [usage example](/docs/modules/memory/integrations/cassandra_chat_message_history).
See a [usage example](/docs/modules/memory/integrations/cassandra_chat_message_history.html).
```python
from langchain.memory import CassandraChatMessageHistory

View File

@@ -10,7 +10,7 @@ We need the [API Key](https://docs.chaindesk.ai/api-reference/authentication).
## Retriever
See a [usage example](/docs/integrations/retrievers/chaindesk).
See a [usage example](/docs/modules/data_connection/retrievers/integrations/chaindesk.html).
```python
from langchain.retrievers import ChaindeskRetriever

View File

@@ -18,11 +18,11 @@ whether for semantic search or example selection.
from langchain.vectorstores import Chroma
```
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/integrations/vectorstores/chroma.html)
For a more detailed walkthrough of the Chroma wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/chroma.html)
## Retriever
See a [usage example](/docs/modules/data_connection/retrievers/how_to/self_query/chroma_self_query).
See a [usage example](/docs/modules/data_connection/retrievers/how_to/self_query/chroma_self_query.html).
```python
from langchain.retrievers import SelfQueryRetriever

View File

@@ -25,7 +25,7 @@ from langchain.llms import Clarifai
llm = Clarifai(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
```
For more details, the docs on the Clarifai LLM wrapper provide a [detailed walkthrough](/docs/integrations/llms/clarifai.html).
For more details, the docs on the Clarifai LLM wrapper provide a [detailed walkthrough](/docs/modules/model_io/models/llms/integrations/clarifai.html).
### Text Embedding Models
@@ -37,7 +37,7 @@ There is a Clarifai Embedding model in LangChain, which you can access with:
from langchain.embeddings import ClarifaiEmbeddings
embeddings = ClarifaiEmbeddings(pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)
```
For more details, the docs on the Clarifai Embeddings wrapper provide a [detailed walkthrough](/docs/integrations/text_embedding/clarifai.html).
For more details, the docs on the Clarifai Embeddings wrapper provide a [detailed walkthrough](/docs/modules/data_connection/text_embedding/integrations/clarifai.html).
## Vectorstore
@@ -49,4 +49,4 @@ You can also add data directly from LangChain, and the auto-indexing will
from langchain.vectorstores import Clarifai
clarifai_vector_db = Clarifai.from_texts(user_id=USER_ID, app_id=APP_ID, texts=texts, pat=CLARIFAI_PAT, number_of_docs=NUMBER_OF_DOCS, metadatas = metadatas)
```
For more details, the docs on the Clarifai vector store provide a [detailed walkthrough](/docs/integrations/text_embedding/clarifai.html).
For more details, the docs on the Clarifai vector store provide a [detailed walkthrough](/docs/modules/data_connection/text_embedding/integrations/clarifai.html).

View File

@@ -15,7 +15,7 @@ Get a [Cohere api key](https://dashboard.cohere.ai/) and set it as an environmen
## LLM
There exists a Cohere LLM wrapper, which you can access with
See a [usage example](/docs/integrations/llms/cohere).
See a [usage example](/docs/modules/model_io/models/llms/integrations/cohere.html).
```python
from langchain.llms import Cohere
@@ -27,11 +27,11 @@ There exists a Cohere Embedding model, which you can access with
```python
from langchain.embeddings import CohereEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/cohere.html)
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/text_embedding/integrations/cohere.html)
## Retriever
See a [usage example](/docs/integrations/retrievers/cohere-reranker).
See a [usage example](/docs/modules/data_connection/retrievers/integrations/cohere-reranker.html).
```python
from langchain.retrievers.document_compressors import CohereRerank

View File

@@ -9,7 +9,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/college_confidential).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/college_confidential.html).
```python
from langchain.document_loaders import CollegeConfidentialLoader

View File

@@ -15,7 +15,7 @@ See [instructions](https://support.atlassian.com/atlassian-account/docs/manage-a
## Document Loader
See a [usage example](/docs/integrations/document_loaders/confluence).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/confluence.html).
```python
from langchain.document_loaders import ConfluenceLoader

View File

@@ -54,4 +54,4 @@ llm = CTransformers(model='marella/gpt-2-ggml', config=config)
See [Documentation](https://github.com/marella/ctransformers#config) for a list of available parameters.
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/ctransformers.html).
For a more detailed walkthrough of this, see [this notebook](/docs/modules/model_io/models/llms/integrations/ctransformers.html).

View File

@@ -12,7 +12,7 @@ We must initialize the loader with the Datadog API key and APP key, and we need
## Document Loader
See a [usage example](/docs/integrations/document_loaders/datadog_logs).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/datadog_logs.html).
```python
from langchain.document_loaders import DatadogLogsLoader

View File

@@ -16,7 +16,7 @@ The DataForSEO utility wraps the API. To import this utility, use:
from langchain.utilities import DataForSeoAPIWrapper
```
For a detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/dataforseo.ipynb).
For a detailed walkthrough of this wrapper, see [this notebook](/docs/modules/agents/tools/integrations/dataforseo.ipynb).
### Tool

View File

@@ -27,4 +27,4 @@ from langchain.vectorstores import DeepLake
```
For a more detailed walkthrough of the Deep Lake wrapper, see [this notebook](/docs/integrations/vectorstores/deeplake.html)
For a more detailed walkthrough of the Deep Lake wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/deeplake.html)

View File

@@ -11,7 +11,7 @@ Read [instructions](https://docs.diffbot.com/reference/authentication) how to ge
## Document Loader
See a [usage example](/docs/integrations/document_loaders/diffbot).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/diffbot.html).
```python
from langchain.document_loaders import DiffbotLoader

View File

@@ -23,7 +23,7 @@ with Discord. That email will have a download button using which you would be ab
## Document Loader
See a [usage example](/docs/integrations/document_loaders/discord).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/discord.html).
```python
from langchain.document_loaders import DiscordChatLoader

View File

@@ -13,7 +13,7 @@ pip install lxml
## Document Loader
See a [usage example](/docs/integrations/document_loaders/docugami).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/docugami.html).
```python
from langchain.document_loaders import DocugamiLoader

View File

@@ -12,7 +12,7 @@ pip install duckdb
## Document Loader
See a [usage example](/docs/integrations/document_loaders/duckdb).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/duckdb.html).
```python
from langchain.document_loaders import DuckDBLoader

View File

@@ -17,7 +17,7 @@ pip install elasticsearch
>The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London's City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval.
See a [usage example](/docs/integrations/retrievers/elastic_search_bm25).
See a [usage example](/docs/modules/data_connection/retrievers/integrations/elastic_search_bm25.html).
```python
from langchain.retrievers import ElasticSearchBM25Retriever


@@ -13,7 +13,7 @@ pip install html2text
## Document Loader
See a [usage example](/docs/integrations/document_loaders/evernote).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/evernote.html).
```python
from langchain.document_loaders import EverNoteLoader


@@ -14,7 +14,7 @@ pip install pandas
## Document Loader
See a [usage example](/docs/integrations/document_loaders/facebook_chat).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/facebook_chat.html).
```python
from langchain.document_loaders import FacebookChatLoader


@@ -14,7 +14,7 @@ The `file key` can be pulled from the URL. https://www.figma.com/file/{filekey}
## Document Loader
See a [usage example](/docs/integrations/document_loaders/figma).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/figma.html).
```python
from langchain.document_loaders import FigmaFileLoader


@@ -12,7 +12,7 @@ pip install GitPython
## Document Loader
See a [usage example](/docs/integrations/document_loaders/git).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/git.html).
```python
from langchain.document_loaders import GitLoader


@@ -8,7 +8,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/gitbook).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/gitbook.html).
```python
from langchain.document_loaders import GitbookLoader


@@ -1,6 +1,6 @@
# Golden
>[Golden](https://golden.com) provides a set of natural language APIs for querying and enrichment using the Golden Knowledge Graph e.g. queries such as: `Products from OpenAI`, `Generative ai companies with series a funding`, and `rappers who invest` can be used to retrieve structured data about relevant entities.
>[Golden](https://golden.com) provides a set of natural language APIs for querying and enrichment using the Golden Knowledge Graph e.g. queries such as: `Products from OpenAI`, `Generative ai companies with series a funding`, and `rappers who invest` can be used to retrieve relevant structured data about relevant entities.
>
>The `golden-query` langchain tool is a wrapper on top of the [Golden Query API](https://docs.golden.com/reference/query-api) which enables programmatic access to these results.
>See the [Golden Query API docs](https://docs.golden.com/reference/query-api) for more information.
@@ -20,7 +20,7 @@ There exists a GoldenQueryAPIWrapper utility which wraps this API. To import thi
from langchain.utilities.golden_query import GoldenQueryAPIWrapper
```
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/golden_query.html).
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/modules/agents/tools/integrations/golden_query.html).
### Tool


@@ -13,7 +13,7 @@ pip install google-cloud-bigquery
## Document Loader
See a [usage example](/docs/integrations/document_loaders/google_bigquery).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/google_bigquery.html).
```python
from langchain.document_loaders import BigQueryLoader


@@ -14,12 +14,12 @@ pip install google-cloud-storage
There are two loaders for the `Google Cloud Storage`: the `Directory` and the `File` loaders.
See a [usage example](/docs/integrations/document_loaders/google_cloud_storage_directory).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/google_cloud_storage_directory.html).
```python
from langchain.document_loaders import GCSDirectoryLoader
```
See a [usage example](/docs/integrations/document_loaders/google_cloud_storage_file).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/google_cloud_storage_file.html).
```python
from langchain.document_loaders import GCSFileLoader


@@ -14,7 +14,7 @@ pip install google-api-python-client google-auth-httplib2 google-auth-oauthlib
## Document Loader
See a [usage example and authorizing instructions](/docs/integrations/document_loaders/google_drive.html).
See a [usage example and authorizing instructions](/docs/modules/data_connection/document_loaders/integrations/google_drive.html).
```python


@@ -18,7 +18,7 @@ There exists a GoogleSearchAPIWrapper utility which wraps this API. To import th
from langchain.utilities import GoogleSearchAPIWrapper
```
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_search.html).
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/modules/agents/tools/integrations/google_search.html).
### Tool


@@ -59,7 +59,7 @@ So the final answer is: El Palmar, Spain
'El Palmar, Spain'
```
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/google_serper.html).
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/modules/agents/tools/integrations/google_serper.html).
### Tool


@@ -45,4 +45,4 @@ model("Once upon a time, ", callbacks=callbacks)
You can find links to model file downloads in the [pyllamacpp](https://github.com/nomic-ai/pyllamacpp) repository.
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/gpt4all.html)
For a more detailed walkthrough of this, see [this notebook](/docs/modules/model_io/models/llms/integrations/gpt4all.html)


@@ -8,7 +8,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/gutenberg).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/gutenberg.html).
```python
from langchain.document_loaders import GutenbergLoader


@@ -11,7 +11,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/hacker_news).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/hacker_news.html).
```python
from langchain.document_loaders import HNLoader


@@ -16,7 +16,7 @@ pip install psycopg2
## Vector Store
See a [usage example](/docs/integrations/vectorstores/hologres).
See a [usage example](/docs/modules/data_connection/vectorstores/integrations/hologres.html).
```python
from langchain.vectorstores import Hologres


@@ -30,7 +30,7 @@ To use a the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.llms import HuggingFaceHub
```
For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](/docs/integrations/llms/huggingface_hub.html)
For a more detailed walkthrough of the Hugging Face Hub wrapper, see [this notebook](/docs/modules/model_io/models/llms/integrations/huggingface_hub.html)
### Embeddings
@@ -47,7 +47,7 @@ To use a the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.embeddings import HuggingFaceHubEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/huggingfacehub.html)
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/text_embedding/integrations/huggingfacehub.html)
### Tokenizer


@@ -9,7 +9,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/ifixit).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/ifixit.html).
```python
from langchain.document_loaders import IFixitLoader


@@ -8,7 +8,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/imsdb).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/imsdb.html).
```python


@@ -15,7 +15,7 @@ There exists a Jina Embeddings wrapper, which you can access with
```python
from langchain.embeddings import JinaEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/jina.html)
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/text_embedding/integrations/jina.html)
## Deployment


@@ -20,4 +20,4 @@ To import this vectorstore:
from langchain.vectorstores import LanceDB
```
For a more detailed walkthrough of the LanceDB wrapper, see [this notebook](/docs/integrations/vectorstores/lancedb.html)
For a more detailed walkthrough of the LanceDB wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/lancedb.html)


@@ -15,7 +15,7 @@ There exists a LlamaCpp LLM wrapper, which you can access with
```python
from langchain.llms import LlamaCpp
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/llms/llamacpp.html)
For a more detailed walkthrough of this, see [this notebook](/docs/modules/model_io/models/llms/integrations/llamacpp.html)
### Embeddings
@@ -23,4 +23,4 @@ There exists a LlamaCpp Embeddings wrapper, which you can access with
```python
from langchain.embeddings import LlamaCppEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/llamacpp.html)
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/text_embedding/integrations/llamacpp.html)


@@ -28,4 +28,4 @@ To import this vectorstore:
from langchain.vectorstores import Marqo
```
For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](/docs/integrations/vectorstores/marqo.html)
For a more detailed walkthrough of the Marqo wrapper and some of its unique features, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/marqo.html)


@@ -23,7 +23,7 @@ pip install -qU mwparserfromhell
## Document Loader
See a [usage example](/docs/integrations/document_loaders/mediawikidump).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/mediawikidump.html).
```python


@@ -10,11 +10,11 @@ First, you need to install a python package.
pip install o365
```
Then follow instructions [here](/docs/integrations/document_loaders/microsoft_onedrive.html).
Then follow instructions [here](/docs/modules/data_connection/document_loaders/integrations/microsoft_onedrive.html).
## Document Loader
See a [usage example](/docs/integrations/document_loaders/microsoft_onedrive).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/microsoft_onedrive.html).
```python


@@ -8,7 +8,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/microsoft_powerpoint).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/microsoft_powerpoint.html).
```python


@@ -8,7 +8,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/microsoft_word).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/microsoft_word.html).
```python


@@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Milvus
```
For a more detailed walkthrough of the Milvus wrapper, see [this notebook](/docs/integrations/vectorstores/milvus.html)
For a more detailed walkthrough of the Milvus wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/milvus.html)


@@ -17,4 +17,4 @@ There exists a modelscope Embeddings wrapper, which you can access with
from langchain.embeddings import ModelScopeEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/modelscope_hub.html)
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/text_embedding/integrations/modelscope_hub.html)


@@ -11,7 +11,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/modern_treasury).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/modern_treasury.html).
```python


@@ -62,4 +62,4 @@ To import this vectorstore:
from langchain.vectorstores import MyScale
```
For a more detailed walkthrough of the MyScale wrapper, see [this notebook](/docs/integrations/vectorstores/myscale.html)
For a more detailed walkthrough of the MyScale wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/myscale.html)


@@ -12,14 +12,14 @@ All instructions are in examples below.
We have two different loaders: `NotionDirectoryLoader` and `NotionDBLoader`.
See a [usage example for the NotionDirectoryLoader](/docs/integrations/document_loaders/notion.html).
See a [usage example for the NotionDirectoryLoader](/docs/modules/data_connection/document_loaders/integrations/notion.html).
```python
from langchain.document_loaders import NotionDirectoryLoader
```
See a [usage example for the NotionDBLoader](/docs/integrations/document_loaders/notiondb.html).
See a [usage example for the NotionDBLoader](/docs/modules/data_connection/document_loaders/integrations/notiondb.html).
```python


@@ -10,7 +10,7 @@ All instructions are in examples below.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/obsidian).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/obsidian.html).
```python


@@ -32,7 +32,7 @@ If you are using a model hosted on `Azure`, you should use different wrapper for
```python
from langchain.llms import AzureOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [this notebook](/docs/integrations/llms/azure_openai_example.html)
For a more detailed walkthrough of the `Azure` wrapper, see [this notebook](/docs/modules/model_io/models/llms/integrations/azure_openai_example.html)
@@ -41,7 +41,7 @@ For a more detailed walkthrough of the `Azure` wrapper, see [this notebook](/doc
```python
from langchain.embeddings import OpenAIEmbeddings
```
For a more detailed walkthrough of this, see [this notebook](/docs/integrations/text_embedding/openai.html)
For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/text_embedding/integrations/openai.html)
## Tokenizer
@@ -58,7 +58,7 @@ For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_
## Chain
See a [usage example](/docs/modules/chains/additional/moderation).
See a [usage example](/docs/modules/chains/additional/moderation.html).
```python
from langchain.chains import OpenAIModerationChain
@@ -66,7 +66,7 @@ from langchain.chains import OpenAIModerationChain
## Document Loader
See a [usage example](/docs/integrations/document_loaders/chatgpt_loader).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/chatgpt_loader.html).
```python
from langchain.document_loaders.chatgpt import ChatGPTLoader
@@ -74,7 +74,7 @@ from langchain.document_loaders.chatgpt import ChatGPTLoader
## Retriever
See a [usage example](/docs/integrations/retrievers/chatgpt-plugin).
See a [usage example](/docs/modules/data_connection/retrievers/integrations/chatgpt-plugin.html).
```python
from langchain.retrievers import ChatGPTPluginRetriever


@@ -67,4 +67,4 @@ llm("What is the difference between a duck and a goose? And why there are so man
### Usage
For a more detailed walkthrough of the OpenLLM Wrapper, see the
[example notebook](/docs/integrations/llms/openllm.html)
[example notebook](/docs/modules/model_io/models/llms/integrations/openllm.html)


@@ -18,4 +18,4 @@ To import this vectorstore:
from langchain.vectorstores import OpenSearchVectorSearch
```
For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](/docs/integrations/vectorstores/opensearch.html)
For a more detailed walkthrough of the OpenSearch wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/opensearch.html)


@@ -29,7 +29,7 @@ There exists a OpenWeatherMapAPIWrapper utility which wraps this API. To import
from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper
```
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/openweathermap.html).
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/modules/agents/tools/integrations/openweathermap.html).
### Tool


@@ -26,4 +26,4 @@ from langchain.vectorstores.pgvector import PGVector
### Usage
For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](/docs/integrations/vectorstores/pgvector.html)
For a more detailed walkthrough of the PGVector Wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/pgvector.html)


@@ -19,4 +19,4 @@ whether for semantic search or example selection.
from langchain.vectorstores import Pinecone
```
For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](/docs/integrations/vectorstores/pinecone.html)
For a more detailed walkthrough of the Pinecone vectorstore, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/pinecone.html)


@@ -46,4 +46,4 @@ This LLM is identical to the [OpenAI](/docs/ecosystem/integrations/openai.html)
- you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](/docs/integrations/chat/promptlayer_chatopenai.html) and `PromptLayerOpenAIChat`
PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](/docs/modules/model_io/models/chat/integrations/promptlayer_chatopenai.html) and `PromptLayerOpenAIChat`


@@ -16,7 +16,7 @@ view these connections from the dashboard and retrieve data using the server-sid
1. Create an account in the [dashboard](https://dashboard.psychic.dev/).
2. Use the [react library](https://docs.psychic.dev/sidekick-link) to add the Psychic link modal to your frontend react app. You will use this to connect the SaaS apps.
3. Once you have created a connection, you can use the `PsychicLoader` by following the [example notebook](/docs/integrations/document_loaders/psychic.html)
3. Once you have created a connection, you can use the `PsychicLoader` by following the [example notebook](/docs/modules/data_connection/document_loaders/integrations/psychic.html)
## Advantages vs Other Document Loaders


@@ -17,4 +17,4 @@ To import this vectorstore:
from langchain.vectorstores import Qdrant
```
For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/integrations/vectorstores/qdrant.html)
For a more detailed walkthrough of the Qdrant wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/qdrant.html)


@@ -14,7 +14,7 @@ Make a [Reddit Application](https://www.reddit.com/prefs/apps/) and initialize t
## Document Loader
See a [usage example](/docs/integrations/document_loaders/reddit).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/reddit.html).
```python


@@ -92,7 +92,7 @@ To import this vectorstore:
from langchain.vectorstores import Redis
```
For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/integrations/vectorstores/redis.html).
For a more detailed walkthrough of the Redis vectorstore wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/redis.html).
### Retriever


@@ -10,7 +10,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/roam).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/roam.html).
```python
from langchain.document_loaders import RoamLoader


@@ -12,7 +12,7 @@ pip install rockset
## Vector Store
See a [usage example](/docs/integrations/vectorstores/rockset).
See a [usage example](/docs/modules/data_connection/vectorstores/integrations/rockset.html).
```python
from langchain.vectorstores import RocksetDB


@@ -15,7 +15,7 @@ custom LLMs, you can use the `SelfHostedPipeline` parent class.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](/docs/integrations/llms/runhouse.html)
For a more detailed walkthrough of the Self-hosted LLMs, see [this notebook](/docs/modules/model_io/models/llms/integrations/runhouse.html)
## Self-hosted Embeddings
There are several ways to use self-hosted embeddings with LangChain via Runhouse.
@@ -26,4 +26,4 @@ the `SelfHostedEmbedding` class.
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](/docs/integrations/text_embedding/self-hosted.html)
For a more detailed walkthrough of the Self-hosted Embeddings, see [this notebook](/docs/modules/data_connection/text_embedding/integrations/self-hosted.html)


@@ -40,7 +40,7 @@ We have to set up following required parameters of the `SagemakerEndpoint` call:
## LLM
See a [usage example](/docs/integrations/llms/sagemaker).
See a [usage example](/docs/modules/model_io/models/llms/integrations/sagemaker.html).
```python
from langchain import SagemakerEndpoint
@@ -49,7 +49,7 @@ from langchain.llms.sagemaker_endpoint import LLMContentHandler
## Text Embedding Models
See a [usage example](/docs/integrations/text_embedding/sagemaker-endpoint).
See a [usage example](/docs/modules/data_connection/text_embedding/integrations/sagemaker-endpoint.html).
```python
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.llms.sagemaker_endpoint import ContentHandlerBase


@@ -17,7 +17,7 @@ There exists a SerpAPI utility which wraps this API. To import this utility:
from langchain.utilities import SerpAPIWrapper
```
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/integrations/tools/serpapi.html).
For a more detailed walkthrough of this wrapper, see [this notebook](/docs/modules/agents/tools/integrations/serpapi.html).
### Tool


@@ -13,7 +13,7 @@ pip install singlestoredb
## Vector Store
See a [usage example](/docs/integrations/vectorstores/singlestoredb).
See a [usage example](/docs/modules/data_connection/vectorstores/integrations/singlestoredb.html).
```python
from langchain.vectorstores import SingleStoreDB


@@ -19,4 +19,4 @@ To import this vectorstore:
from langchain.vectorstores import SKLearnVectorStore
```
For a more detailed walkthrough of the SKLearnVectorStore wrapper, see [this notebook](/docs/integrations/vectorstores/sklearn.html).
For a more detailed walkthrough of the SKLearnVectorStore wrapper, see [this notebook](/docs/modules/data_connection/vectorstores/integrations/sklearn.html).


@@ -10,7 +10,7 @@ There isn't any special setup for it.
## Document Loader
See a [usage example](/docs/integrations/document_loaders/slack).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/slack.html).
```python
from langchain.document_loaders import SlackDirectoryLoader


@@ -4,11 +4,11 @@
## Installation and Setup
See [setup instructions](/docs/integrations/document_loaders/spreedly.html).
See [setup instructions](/docs/modules/data_connection/document_loaders/integrations/spreedly.html).
## Document Loader
See a [usage example](/docs/integrations/document_loaders/spreedly).
See a [usage example](/docs/modules/data_connection/document_loaders/integrations/spreedly.html).
```python
from langchain.document_loaders import SpreedlyLoader

Some files were not shown because too many files have changed in this diff.