diff --git a/docs/extras/integrations/toolkits/amadeus.ipynb b/docs/extras/integrations/toolkits/amadeus.ipynb index afcaaccfbb9..baa9288dcd8 100644 --- a/docs/extras/integrations/toolkits/amadeus.ipynb +++ b/docs/extras/integrations/toolkits/amadeus.ipynb @@ -1,13 +1,12 @@ { "cells": [ { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ - "# Amadeus Toolkit\n", + "# Amadeus\n", "\n", - "This notebook walks you through connecting LangChain to the Amadeus travel information API\n", + "This notebook walks you through connecting LangChain to the `Amadeus` travel information API\n", "\n", "To use this toolkit, you will need to set up your credentials explained in the [Amadeus for developers getting started overview](https://developers.amadeus.com/get-started/get-started-with-self-service-apis-335). Once you've received a AMADEUS_CLIENT_ID and AMADEUS_CLIENT_SECRET, you can input them as environmental variables below." ] @@ -22,7 +21,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -46,7 +44,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -234,7 +231,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.4" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/azure_cognitive_services.ipynb b/docs/extras/integrations/toolkits/azure_cognitive_services.ipynb index 669519ba2e1..609cc2e4e49 100644 --- a/docs/extras/integrations/toolkits/azure_cognitive_services.ipynb +++ b/docs/extras/integrations/toolkits/azure_cognitive_services.ipynb @@ -4,9 +4,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Azure Cognitive Services Toolkit\n", + "# Azure Cognitive Services\n", "\n", - "This toolkit is used to interact with the Azure Cognitive Services API to achieve some multimodal capabilities.\n", + "This toolkit is used to interact with the `Azure Cognitive Services API` to achieve some multimodal capabilities.\n", "\n", "Currently There are four tools bundled in this toolkit:\n", "- AzureCogsImageAnalysisTool: used to extract caption, objects, tags, and text from images. (Note: this tool is not available on Mac OS yet, due to the dependency on `azure-ai-vision` package, which is only supported on Windows and Linux currently.)\n", @@ -264,9 +264,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.10.12" } }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 4 } diff --git a/docs/extras/integrations/toolkits/csv.ipynb b/docs/extras/integrations/toolkits/csv.ipynb index 5a0ff426a65..d64484d8ef3 100644 --- a/docs/extras/integrations/toolkits/csv.ipynb +++ b/docs/extras/integrations/toolkits/csv.ipynb @@ -5,24 +5,14 @@ "id": "7094e328", "metadata": {}, "source": [ - "# CSV Agent\n", + "# CSV\n", "\n", - "This notebook shows how to use agents to interact with a csv. It is mostly optimized for question answering.\n", + "This notebook shows how to use agents to interact with data in `CSV` format. It is mostly optimized for question answering.\n", "\n", "**NOTE: this agent calls the Pandas DataFrame agent under the hood, which in turn calls the Python agent, which executes LLM generated Python code - this can be bad if the LLM generated Python code is harmful. 
Use cautiously.**\n", "\n" ] }, - { - "cell_type": "code", - "execution_count": 1, - "id": "827982c7", - "metadata": {}, - "outputs": [], - "source": [ - "from langchain.agents import create_csv_agent" - ] - }, { "cell_type": "code", "execution_count": 2, @@ -32,7 +22,9 @@ "source": [ "from langchain.llms import OpenAI\n", "from langchain.chat_models import ChatOpenAI\n", - "from langchain.agents.agent_types import AgentType" + "from langchain.agents.agent_types import AgentType\n", + "\n", + "from langchain.agents import create_csv_agent" ] }, { @@ -40,9 +32,9 @@ "id": "bd806175", "metadata": {}, "source": [ - "## Using ZERO_SHOT_REACT_DESCRIPTION\n", + "## Using `ZERO_SHOT_REACT_DESCRIPTION`\n", "\n", - "This shows how to initialize the agent using the ZERO_SHOT_REACT_DESCRIPTION agent type. Note that this is an alternative to the above." + "This shows how to initialize the agent using the `ZERO_SHOT_REACT_DESCRIPTION` agent type. Note that this is an alternative to the above." ] }, { @@ -130,9 +122,7 @@ "cell_type": "code", "execution_count": 5, "id": "a96309be", - "metadata": { - "scrolled": false - }, + "metadata": {}, "outputs": [ { "name": "stderr", @@ -305,7 +295,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.1" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/document_comparison_toolkit.ipynb b/docs/extras/integrations/toolkits/document_comparison_toolkit.ipynb index 5dbe075516e..7e79d0c3622 100644 --- a/docs/extras/integrations/toolkits/document_comparison_toolkit.ipynb +++ b/docs/extras/integrations/toolkits/document_comparison_toolkit.ipynb @@ -91,9 +91,7 @@ "cell_type": "code", "execution_count": 4, "id": "c4d56c25", - "metadata": { - "scrolled": false - }, + "metadata": {}, "outputs": [ { "name": "stdout", @@ -169,9 +167,7 @@ "cell_type": "code", "execution_count": 6, "id": "6db4c853", - "metadata": { - "scrolled": false - }, + "metadata": {}, "outputs": [ { "name": "stdout", @@ -235,13 +231,7 @@ " \"prompts\": [\n", " \"System: Use the following pieces of context to answer the users question. \\nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\\n----------------\\nAlphabet Inc.\\nCONSOLIDATED STATEMENTS OF INCOME\\n(In millions, except per share amounts, unaudited)\\nQuarter Ended March 31,\\n2022 2023\\nRevenues $ 68,011 $ 69,787 \\nCosts and expenses:\\nCost of revenues 29,599 30,612 \\nResearch and development 9,119 11,468 \\nSales and marketing 5,825 6,533 \\nGeneral and administrative 3,374 3,759 \\nTotal costs and expenses 47,917 52,372 \\nIncome from operations 20,094 17,415 \\nOther income (expense), net (1,160) 790 \\nIncome before income taxes 18,934 18,205 \\nProvision for income taxes 2,498 3,154 \\nNet income $ 16,436 $ 15,051 \\nBasic earnings per share of Class A, Class B, and Class C stock $ 1.24 $ 1.18 \\nDiluted earnings per share of Class A, Class B, and Class C stock $ 1.23 $ 1.17 \\nNumber of shares used in basic earnings per share calculation 13,203 12,781 \\nNumber of shares used in diluted earnings per share calculation 13,351 12,823 \\n6\\n\\nAlphabet Announces First Quarter 2023 Results\\nMOUNTAIN VIEW, Calif. – April 25, 2023 – Alphabet Inc. (NASDAQ: GOOG, GOOGL) today announced financial \\nresults for the quarter ended March 31, 2023 .\\nSundar Pichai, CEO of Alphabet and Google, said: “We are pleased with our business performance in the first \\nquarter, with Search performing well and momentum in Cloud. 
We introduced important product updates anchored \\nin deep computer science and AI. Our North Star is providing the most helpful answers for our users, and we see \\nhuge opportunities ahead, continuing our long track record of innovation.”\\nRuth Porat, CFO of Alphabet and Google, said: “Resilience in Search and momentum in Cloud resulted in Q1 \\nconsolidated revenues of $69.8 billion, up 3% year over year, or up 6% in constant currency. We remain committed \\nto delivering long-term growth and creating capacity to invest in our most compelling growth areas by re-engineering \\nour cost base.”\\nQ1 2023 financial highlights (unaudited)\\nOur first quarter 2023 results reflect:\\ni.$2.6 billion in charges related to reductions in our workforce and office space; \\nii.a $988 million reduction in depreciation expense from the change in estimated useful life of our servers and \\ncertain network equipment; and\\niii.a shift in the timing of our annual employee stock-based compensation awards resulting in relatively less \\nstock-based compensation expense recognized in the first quarter compared to the remaining quarters of \\nthe ye ar. The shift in timing itself will not affect the amount of stock-based compensation expense over the \\nfull fiscal year 2023.\\nFor further information, please refer to our blog post also filed with the SEC via Form 8-K on April 20, 2023.\\nThe following table summarizes our consolidated financial results for the quarters ended March 31, 2022 and 2023 \\n(in millions, except for per share information and percentages). \\nQuarter Ended March 31,\\n2022 2023\\nRevenues $ 68,011 $ 69,787 \\nChange in revenues year over year 23 % 3 %\\nChange in constant currency revenues year over year(1) 26 % 6 %\\nOperating income $ 20,094 $ 17,415 \\nOperating margin 30 % 25 %\\nOther income (expense), net $ (1,160) $ 790 \\nNet income $ 16,436 $ 15,051 \\nDiluted EPS $ 1.23 $ 1.17 \\n(1) Non-GAAP measure. See the table captioned “Reconciliation from GAAP revenues to non-GAAP constant currency \\nrevenues and GAAP percentage change in revenues to non-GAAP percentage change in constant currency revenues” for \\nmore details.\\n\\nQ1 2023 supplemental information (in millions, except for number of employees; unaudited)\\nRevenues, T raffic Acquisition Costs (TAC), and number of employees\\nQuarter Ended March 31,\\n2022 2023\\nGoogle Search & other $ 39,618 $ 40,359 \\nYouTube ads 6,869 6,693 \\nGoogle Network 8,174 7,496 \\nGoogle advertising 54,661 54,548 \\nGoogle other 6,811 7,413 \\nGoogle Services total 61,472 61,961 \\nGoogle Cloud 5,821 7,454 \\nOther Bets 440 288 \\nHedging gains (losses) 278 84 \\nTotal revenues $ 68,011 $ 69,787 \\nTotal TAC $ 11,990 $ 11,721 \\nNumber of employees(1) 163,906 190,711 \\n(1) As of March 31, 2023, the number of employees includes almost all of the employees affected by the reduction of our \\nworkforce. We expect most of those affected will no longer be reflected in our headcount by the end of the second quarter \\nof 2023, subject to local law and consultation requirements.\\nSegment Operating Results\\nReflecting DeepMind’s increasing collaboration with Google Services, Google Cloud, and Other Bets, beginning in \\nthe first quarter of 2023 DeepMind is reported as part of Alphabet’s unallocated corporate costs instead of within \\nOther Bets. 
Additionally, beginning in the first quarter of 2023, we updated and simplified our cost allocation \\nmethodologies to provide our business leaders with increased transparency for decision-making . Prior periods have \\nbeen recast to reflect the revised presentation and are shown in Recast Historical Segment Results below .\\nAs announced on April 20, 2023 , we are bringing together part of Google Research (the Brain Team) and DeepMind \\nto significantly accelerate our progress in AI. This change does not affect first quarter reporting. The group, called \\nGoogle DeepMind, will be reported within Alphabet's unallocated corporate costs beginning in the second quarter of \\n2023.\\nQuarter Ended March 31,\\n2022 2023\\n(recast)\\nOperating income (loss):\\nGoogle Services $ 21,973 $ 21,737 \\nGoogle Cloud (706) 191 \\nOther Bets (835) (1,225) \\nCorporate costs, unallocated(1) (338) (3,288) \\nTotal income from operations $ 20,094 $ 17,415 \\n(1)Hedging gains (losses) related to revenue included in unallocated corporate costs were $278 million and $84 million for the \\nthree months ended March 31, 2022 and 2023 , respectively. For the three months ended March 31, 2023, unallocated \\ncorporate costs include charges related to the reductions in our workforce and office space totaling $2.5 billion . \\n2\\n\\nSegment results\\nThe following table presents our segment revenues and operating income (loss) (in millions; unaudited):\\nQuarter Ended March 31,\\n2022 2023\\n(recast)\\nRevenues:\\nGoogle Services $ 61,472 $ 61,961 \\nGoogle Cloud 5,821 7,454 \\nOther Bets 440 288 \\nHedging gains (losses) 278 84 \\nTotal revenues $ 68,011 $ 69,787 \\nOperating income (loss):\\nGoogle Services $ 21,973 $ 21,737 \\nGoogle Cloud (706) 191 \\nOther Bets (835) (1,225) \\nCorporate costs, unallocated (338) (3,288) \\nTotal income from operations $ 20,094 $ 17,415 \\nWe report our segment results as Google Services, Google Cloud, and Other Bets:\\n•Google Services includes products and services such as ads, Android, Chrome, hardware, Google Maps, \\nGoogle Play, Search, and YouTube. Google Services generates revenues primarily from advertising; sales \\nof apps and in-app purchases, and hardware; and fees received for subscription-based products such as \\nYouTube Premium and YouTube TV.\\n•Google Cloud includes infrastructure and platform services, collaboration tools, and other services for \\nenterprise customers. Google Cloud generates revenues from fees received for Google Cloud Platform \\nservices, Google Workspace communication and collaboration tools, and other enterprise services.\\n•Other Bets is a combination of multiple operating segments that are not individually material. Revenues \\nfrom Other Bets are generated primarily from the sale of health technology and internet services.\\nAfter the segment reporting changes discussed above, unallocated corporate costs primarily include AI-focused \\nshared R&D activities; corporate initiatives such as our philanthropic activities; and corporate shared costs such as \\nfinance, certain human resource costs, and legal, including certain fines and settlements. In the first quarter of 2023, \\nunallocated corporate costs also include charges associated with reductions in our workforce and office space. 
\\nAdditionally, hedging gains (losses) related to revenue are included in unallocated corporate costs.\\nRecast Historical Segment Results\\nRecast historical segment results are as follows (in millions; unaudited):\\nQuarter Fiscal Year\\nRecast Historical Results\\nQ1 2022 Q2 2022 Q3 2022 Q4 2022 2021 2022\\nOperating income (loss):\\nGoogle Services $ 21,973 $ 21,621 $ 18,883 $ 20,222 $ 88,132 $ 82,699 \\nGoogle Cloud (706) (590) (440) (186) (2,282) (1,922) \\nOther Bets (835) (1,339) (1,225) (1,237) (4,051) (4,636) \\nCorporate costs, unallocated(1) (338) (239) (83) (639) (3,085) (1,299) \\nTotal income from operations $ 20,094 $ 19,453 $ 17,135 $ 18,160 $ 78,714 $ 74,842 \\n(1)Includes hedging gains (losses); in fiscal years 2021 and 2022 hedging gains of $149 million and $2.0 billion, respectively.\\n8\\nHuman: What was Alphabet's revenue?\"\n", " ]\n", - "}\n" - ] - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ + "}\n", "\u001b[36;1m\u001b[1;3m[llm/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 3:tool:alphabet-earnings > 4:chain:RetrievalQA > 5:chain:StuffDocumentsChain > 6:chain:LLMChain > 7:llm:ChatOpenAI] [1.61s] Exiting LLM run with output:\n", "\u001b[0m{\n", " \"generations\": [\n", @@ -299,13 +289,7 @@ " \"prompts\": [\n", " \"System: Use the following pieces of context to answer the users question. \\nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\\n----------------\\nS U M M A R Y H I G H L I G H T S \\n(1) Excludes SBC (stock -based compensation).\\n(2) Free cash flow = operating cash flow less capex.\\n(3) Includes cash, cash equivalents and investments.Profitability 11.4% operating margin in Q1\\n$2.7B GAAP operating income in Q1\\n$2.5B GAAP net income in Q1\\n$2.9B non -GAAP net income1in Q1In the current macroeconomic environment, we see this year as a unique \\nopportunity for Tesla. As many carmakers are working through challenges with the \\nunit economics of their EV programs, we aim to leverage our position as a cost \\nleader. We are focused on rapidly growing production, investments in autonomy \\nand vehicle software, and remaining on track with our growth investments.\\nOur near -term pricing strategy considers a long -term view on per vehicle \\nprofitability given the potential lifetime value of a Tesla vehicle through autonomy, \\nsupercharging, connectivity and service. We expect that our product pricing will \\ncontinue to evolve, upwards or downwards, depending on a number of factors.\\nAlthough we implemented price reductions on many vehicle models across regions \\nin the first quarter, our operating margins reduced at a manageable rate. We \\nexpect ongoing cost reduction of our vehicles, including improved production \\nefficiency at our newest factories and lower logistics costs, and remain focused on \\noperating leverage as we scale.\\nWe are rapidly growing energy storage production capacity at our Megafactory in \\nLathrop and we recently announced a new Megafactory in Shanghai. We are also \\ncontinuing to execute on our product roadmap, including Cybertruck, our next \\ngeneration vehicle platform, autonomy and other AI enabled products. \\nOur balance sheet and net income enable us to continue to make these capital \\nexpenditures in line with our future growth. 
In this environment, we believe it \\nmakes sense to push forward to ensure we lay a proper foundation for the best \\npossible future.Cash Operating cash flow of $2.5B\\nFree cash flow2of $0.4B in Q1\\n$0.2B increase in our cash and investments3in Q1 to $22.4B\\nOperations Cybertruck factory tooling on track; producing Alpha versions\\nModel Y was the best -selling vehicle in Europe in Q1\\nModel Y was the best -selling vehicle in the US in Q1 (ex -pickups)\\n\\n01234O T H E R H I G H L I G H T S\\n9Services & Other gross margin\\nEnergy Storage deployments (GWh)Energy Storage\\nEnergy storage deployments increased by 360% YoY in Q1 to 3.9 GWh, the highest \\nlevel of deployments we have achieved due to ongoing Megafactory ramp. The ramp of our 40 GWh Megapack factory in Lathrop, California has been successful with still more room to reach full capacity. This Megapack factory will be the first of many. We recently announced our second 40 GWh Megafactory, this time in Shanghai, with construction starting later this year. \\nSolar\\nSolar deployments increased by 40% YoY in Q1 to 67 MW, but declined sequentially in \\nthe quarter, predominantly due to volatile weather and other factors. In addition, the solar industry has been impacted by supply chain challenges.\\nServices and Other\\nBoth revenue and gross profit from Services and Other reached an all -time high in Q1 \\n2023. Within this business division, growth of used vehicle sales remained strong YoY and had healthy margins. Supercharging, while still a relatively small part of the business, continued to grow as we gradually open up the network to non- Tesla \\nvehicles. \\n-4%-2%0%2%4%6%8%\\nQ3'21 Q4'21 Q1'22 Q2'22 Q3'22 Q4'22 Q1'23\\n\\nIn millions of USD or shares as applicable, except per share data Q1-2022 Q2-2022 Q3-2022 Q4-2022 Q1-2023\\nREVENUES\\nAutomotive sales 15,514 13,670 17,785 20,241 18,878 \\nAutomotive regulatory credits 679 344 286 467 521 \\nAutomotive leasing 668 588 621 599 564 \\nTotal automotive revenues 16,861 14,602 18,692 21,307 19,963 \\nEnergy generation and storage 616 866 1,117 1,310 1,529 \\nServices and other 1,279 1,466 1,645 1,701 1,837 \\nTotal revenues 18,756 16,934 21,454 24,318 23,329 \\nCOST OF REVENUES\\nAutomotive sales 10,914 10,153 13,099 15,433 15,422 \\nAutomotive leasing 408 368 381 352 333 \\nTotal automotive cost of revenues 11,322 10,521 13,480 15,785 15,755 \\nEnergy generation and storage 688 769 1,013 1,151 1,361 \\nServices and other 1,286 1,410 1,579 1,605 1,702 \\nTotal cost of revenues 13,296 12,700 16,072 18,541 18,818 \\nGross profit 5,460 4,234 5,382 5,777 4,511 \\nOPERATING EXPENSES\\nResearch and development 865 667 733 810 771 \\nSelling, general and administrative 992 961 961 1,032 1,076 \\nRestructuring and other — 142 — 34 —\\nTotal operating expenses 1,857 1,770 1,694 1,876 1,847 \\nINCOME FROM OPERATIONS 3,603 2,464 3,688 3,901 2,664 \\nInterest income 28 26 86 157 213 \\nInterest expense (61) (44) (53) (33) (29)\\nOther income (expense), net 56 28 (85) (42) (48)\\nINCOME BEFORE INCOME TAXES 3,626 2,474 3,636 3,983 2,800 \\nProvision for income taxes 346 205 305 276 261 \\nNET INCOME 3,280 2,269 3,331 3,707 2,539 \\nNet (loss) income attributable to noncontrolling interests and redeemable noncontrolling interests in \\nsubsidiaries(38) 10 39 20 26 \\nNET INCOME ATTRIBUTABLE TO COMMON STOCKHOLDERS 3,318 2,259 3,292 3,687 2,513 \\nNet income per share of common stock attributable to common stockholders(1)\\nBasic $ 1.07 $ 0.73 $ 1.05 $ 1.18 $ 0.80 \\nDiluted $ 0.95 $ 
0.65 $ 0.95 $ 1.07 $ 0.73 \\nWeighted average shares used in computing net income per share of common stock(1)\\nBasic 3,103 3,111 3,146 3,160 3,166\\nDiluted 3,472 3,464 3,468 3,471 3,468\\nS T A T E M E N T O F O P E R A T I O N S\\n(Unaudited)\\n23 (1) Prior period results have been retroactively adjusted to reflect the three -for-one stock split effected in the form of a stock d ividend in August 2022.\\n\\nQ1-2022 Q2-2022 Q3-2022 Q4-2022 Q1-2023 YoY\\nModel S/X production 14,218 16,411 19,935 20,613 19,437 37%\\nModel 3/Y production 291,189 242,169 345,988 419,088 421,371 45%\\nTotal production 305,407 258,580 365,923 439,701 440,808 44%\\nModel S/X deliveries 14,724 16,162 18,672 17,147 10,695 -27%\\nModel 3/Y deliveries 295,324 238,533 325,158 388,131 412,180 40%\\nTotal deliveries 310,048 254,695 343,830 405,278 422,875 36%\\nof which subject to operating lease accounting 12,167 9,227 11,004 15,184 22,357 84%\\nTotal end of quarter operating lease vehicle count 128,402 131,756 135,054 140,667 153,988 20%\\nGlobal vehicle inventory (days of supply )(1)3 4 8 13 15 400%\\nSolar deployed (MW) 48 106 94 100 67 40%\\nStorage deployed (MWh) 846 1,133 2,100 2,462 3,889 360%\\nTesla locations(2)787 831 903 963 1,000 27%\\nMobile service fleet 1,372 1,453 1,532 1,584 1,692 23%\\nSupercharger stations 3,724 3,971 4,283 4,678 4,947 33%\\nSupercharger connectors 33,657 36,165 38,883 42,419 45,169 34%\\n(1)Days of supply is calculated by dividing new car ending inventory by the relevant quarter’s deliveries and using 75 trading days (aligned with Automotive News definition).\\n(2)Starting in Q1 -2023, we revised our methodology for reporting Tesla’s physical footprint. This count now includes all sales, del ivery, body shop and service locations globally. O P E R A T I O N A L S U M MA R Y\\n(Unaudited)\\n6\\nHuman: What was Tesla's revenue?\"\n", " ]\n", - "}\n" - ] - }, - { - "name": "stdout", - "output_type": "stream", - "text": [ + "}\n", "\u001b[36;1m\u001b[1;3m[llm/end]\u001b[0m \u001b[1m[1:chain:AgentExecutor > 8:tool:tesla-earnings > 9:chain:RetrievalQA > 10:chain:StuffDocumentsChain > 11:chain:LLMChain > 12:llm:ChatOpenAI] [1.17s] Exiting LLM run with output:\n", "\u001b[0m{\n", " \"generations\": [\n", @@ -427,7 +411,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.1" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/github.ipynb b/docs/extras/integrations/toolkits/github.ipynb index bcaa5abd42b..36d13cb7f71 100644 --- a/docs/extras/integrations/toolkits/github.ipynb +++ b/docs/extras/integrations/toolkits/github.ipynb @@ -4,9 +4,10 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Github Toolkit\n", + "# Github\n", "\n", - "The Github toolkit contains tools that enable an LLM agent to interact with a github repository. The tools are a wrapper for the [PyGitHub](https://github.com/PyGithub/PyGithub) library. \n", + "The `Github` toolkit contains tools that enable an LLM agent to interact with a github repository. \n", + "The tool is a wrapper for the [PyGitHub](https://github.com/PyGithub/PyGithub) library. \n", "\n", "## Quickstart\n", "1. Install the pygithub library\n", @@ -38,7 +39,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 1. Install the pygithub library" + "## Setup" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 1. 
Install the `pygithub` library " ] }, { @@ -58,7 +66,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## 2. Create a Github App\n", + "### 2. Create a Github App\n", "\n", "[Follow the instructions here](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app) to create and register a Github app. Make sure your app has the following [repository permissions:](https://docs.github.com/en/rest/overview/permissions-required-for-github-apps?apiVersion=2022-11-28)\n", "* Commit statuses (read only)\n", @@ -71,7 +79,7 @@ "\n", "Once the app has been registered, add it to the repository you wish the bot to act upon.\n", "\n", - "## 3. Set Environmental Variables\n", + "### 3. Set Environmental Variables\n", "\n", "Before initializing your agent, the following environmental variables need to be set:\n", "\n", @@ -86,7 +94,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Example Usage- Simple Agent" + "## Example: Simple Agent" ] }, { @@ -212,7 +220,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "## Example Usage- Advanced Agent\n", + "## Example: Advanced Agent\n", "\n", "If your agent does not need to use all 8 tools, you can build tools individually to use. For this example, we'll make an agent that does not use the create_file, delete_file or create_pull_request tools, but can also use duckduckgo-search." ] @@ -375,9 +383,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.8.16" + "version": "3.10.12" } }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 4 } diff --git a/docs/extras/integrations/toolkits/gmail.ipynb b/docs/extras/integrations/toolkits/gmail.ipynb index e2d6fee59bf..d24ded1f360 100644 --- a/docs/extras/integrations/toolkits/gmail.ipynb +++ b/docs/extras/integrations/toolkits/gmail.ipynb @@ -4,9 +4,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Gmail Toolkit\n", + "# Gmail\n", "\n", - "This notebook walks through connecting a LangChain email to the Gmail API.\n", + "This notebook walks through connecting a LangChain email to the `Gmail API`.\n", "\n", "To use this toolkit, you will need to set up your credentials explained in the [Gmail API docs](https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application). Once you've downloaded the `credentials.json` file, you can start using the Gmail API. Once this is done, we'll install the required libraries." ] @@ -226,7 +226,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.2" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/index.mdx b/docs/extras/integrations/toolkits/index.mdx index 164addc708a..65f3854d317 100644 --- a/docs/extras/integrations/toolkits/index.mdx +++ b/docs/extras/integrations/toolkits/index.mdx @@ -2,7 +2,10 @@ sidebar_position: 0 --- -# Agent toolkits +# Agents & Toolkits + +Agents and Toolkits are placed in the same directory because they are always used together. 
+ import DocCardList from "@theme/DocCardList"; diff --git a/docs/extras/integrations/toolkits/jira.ipynb b/docs/extras/integrations/toolkits/jira.ipynb index 9d32bab37c6..39480eeb588 100644 --- a/docs/extras/integrations/toolkits/jira.ipynb +++ b/docs/extras/integrations/toolkits/jira.ipynb @@ -1,15 +1,15 @@ { "cells": [ { - "attachments": {}, "cell_type": "markdown", "id": "245a954a", "metadata": {}, "source": [ "# Jira\n", "\n", - "This notebook goes over how to use the Jira tool.\n", - "The Jira tool allows agents to interact with a given Jira instance, performing actions such as searching for issues and creating issues, the tool wraps the atlassian-python-api library, for more see: https://atlassian-python-api.readthedocs.io/jira.html\n", + "This notebook goes over how to use the `Jira` toolkit.\n", + "\n", + "The `Jira` toolkit allows agents to interact with a given Jira instance, performing actions such as searching for issues and creating issues, the tool wraps the atlassian-python-api library, for more see: https://atlassian-python-api.readthedocs.io/jira.html\n", "\n", "To use this tool, you must first set as environment variables:\n", " JIRA_API_TOKEN\n", @@ -22,12 +22,12 @@ "execution_count": null, "id": "961b3689", "metadata": { + "ExecuteTime": { + "end_time": "2023-04-17T10:21:20.168639Z", + "start_time": "2023-04-17T10:21:18.698672Z" + }, "vscode": { "languageId": "shellscript" - }, - "ExecuteTime": { - "start_time": "2023-04-17T10:21:18.698672Z", - "end_time": "2023-04-17T10:21:20.168639Z" } }, "outputs": [], @@ -41,8 +41,8 @@ "id": "34bb5968", "metadata": { "ExecuteTime": { - "start_time": "2023-04-17T10:21:22.911233Z", - "end_time": "2023-04-17T10:21:23.730922Z" + "end_time": "2023-04-17T10:21:23.730922Z", + "start_time": "2023-04-17T10:21:22.911233Z" } }, "outputs": [], @@ -58,21 +58,24 @@ { "cell_type": "code", "execution_count": 4, + "id": "b3050b55", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-17T10:22:42.505412Z", + "start_time": "2023-04-17T10:22:42.499447Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, "outputs": [], "source": [ "os.environ[\"JIRA_API_TOKEN\"] = \"abc\"\n", "os.environ[\"JIRA_USERNAME\"] = \"123\"\n", "os.environ[\"JIRA_INSTANCE_URL\"] = \"https://jira.atlassian.com\"\n", "os.environ[\"OPENAI_API_KEY\"] = \"xyz\"" - ], - "metadata": { - "collapsed": false, - "ExecuteTime": { - "start_time": "2023-04-17T10:22:42.499447Z", - "end_time": "2023-04-17T10:22:42.505412Z" - } - }, - "id": "b3050b55" + ] }, { "cell_type": "code", @@ -80,8 +83,8 @@ "id": "ac4910f8", "metadata": { "ExecuteTime": { - "start_time": "2023-04-17T10:22:44.664481Z", - "end_time": "2023-04-17T10:22:44.720538Z" + "end_time": "2023-04-17T10:22:44.720538Z", + "start_time": "2023-04-17T10:22:44.664481Z" } }, "outputs": [], @@ -97,6 +100,17 @@ { "cell_type": "code", "execution_count": 9, + "id": "d5461370", + "metadata": { + "ExecuteTime": { + "end_time": "2023-04-17T10:23:38.121883Z", + "start_time": "2023-04-17T10:23:33.662454Z" + }, + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, "outputs": [ { "name": "stdout", @@ -117,7 +131,9 @@ }, { "data": { - "text/plain": "'A new issue has been created in project PW with the summary \"Make more fried rice\" and description \"Reminder to make more fried rice\".'" + "text/plain": [ + "'A new issue has been created in project PW with the summary \"Make more fried rice\" and description \"Reminder to make more fried rice\".'" + ] }, "execution_count": 9, "metadata": {}, @@ 
-126,20 +142,12 @@ ], "source": [ "agent.run(\"make a new issue in project PW to remind me to make more fried rice\")" - ], - "metadata": { - "collapsed": false, - "ExecuteTime": { - "start_time": "2023-04-17T10:23:33.662454Z", - "end_time": "2023-04-17T10:23:38.121883Z" - } - }, - "id": "d5461370" + ] } ], "metadata": { "kernelspec": { - "display_name": ".venv", + "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, @@ -153,7 +161,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.7" + "version": "3.10.12" }, "vscode": { "interpreter": { @@ -163,4 +171,4 @@ }, "nbformat": 4, "nbformat_minor": 5 -} \ No newline at end of file +} diff --git a/docs/extras/integrations/toolkits/json.ipynb b/docs/extras/integrations/toolkits/json.ipynb index ec34583dd61..89614101359 100644 --- a/docs/extras/integrations/toolkits/json.ipynb +++ b/docs/extras/integrations/toolkits/json.ipynb @@ -5,9 +5,10 @@ "id": "85fb2c03-ab88-4c8c-97e3-a7f2954555ab", "metadata": {}, "source": [ - "# JSON Agent\n", + "# JSON\n", "\n", - "This notebook showcases an agent designed to interact with large JSON/dict objects. This is useful when you want to answer questions about a JSON blob that's too large to fit in the context window of an LLM. The agent is able to iteratively explore the blob to find what it needs to answer the user's question.\n", + "This notebook showcases an agent interacting with large `JSON/dict` objects. \n", + "This is useful when you want to answer questions about a JSON blob that's too large to fit in the context window of an LLM. The agent is able to iteratively explore the blob to find what it needs to answer the user's question.\n", "\n", "In the below example, we are using the OpenAPI spec for the OpenAI API, which you can find [here](https://github.com/openai/openai-openapi/blob/master/openapi.yaml).\n", "\n", @@ -179,7 +180,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.9" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/multion.ipynb b/docs/extras/integrations/toolkits/multion.ipynb index 3382af62104..5502d3e7044 100644 --- a/docs/extras/integrations/toolkits/multion.ipynb +++ b/docs/extras/integrations/toolkits/multion.ipynb @@ -1,15 +1,14 @@ { "cells": [ { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ - "# MultiOn Toolkit\n", + "# MultiOn\n", "\n", - "This notebook walks you through connecting LangChain to the MultiOn Client in your browser\n", + "This notebook walks you through connecting LangChain to the `MultiOn` Client in your browser\n", "\n", - "To use this toolkit, you will need to add MultiOn Extension to your browser as explained in the [MultiOn for Chrome](https://multion.notion.site/Download-MultiOn-ddddcfe719f94ab182107ca2612c07a5)." + "To use this toolkit, you will need to add `MultiOn Extension` to your browser as explained in the [MultiOn for Chrome](https://multion.notion.site/Download-MultiOn-ddddcfe719f94ab182107ca2612c07a5)." 
] }, { @@ -47,7 +46,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -127,7 +125,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.4" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/office365.ipynb b/docs/extras/integrations/toolkits/office365.ipynb index 704ceec4e16..350bcc0495f 100644 --- a/docs/extras/integrations/toolkits/office365.ipynb +++ b/docs/extras/integrations/toolkits/office365.ipynb @@ -1,13 +1,12 @@ { "cells": [ { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ - "# Office365 Toolkit\n", + "# Office365\n", "\n", - "This notebook walks through connecting LangChain to Office365 email and calendar.\n", + "This notebook walks through connecting LangChain to `Office365` email and calendar.\n", "\n", "To use this toolkit, you will need to set up your credentials explained in the [Microsoft Graph authentication and authorization overview](https://learn.microsoft.com/en-us/graph/auth/). Once you've received a CLIENT_ID and CLIENT_SECRET, you can input them as environmental variables below." ] @@ -23,7 +22,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -42,7 +40,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ @@ -238,7 +235,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/openapi.ipynb b/docs/extras/integrations/toolkits/openapi.ipynb index 3e5e4d13646..f97532e36d8 100644 --- a/docs/extras/integrations/toolkits/openapi.ipynb +++ b/docs/extras/integrations/toolkits/openapi.ipynb @@ -5,9 +5,9 @@ "id": "85fb2c03-ab88-4c8c-97e3-a7f2954555ab", "metadata": {}, "source": [ - "# OpenAPI agents\n", + "# OpenAPI\n", "\n", - "We can construct agents to consume arbitrary APIs, here APIs conformant to the OpenAPI/Swagger specification." + "We can construct agents to consume arbitrary APIs, here APIs conformant to the `OpenAPI`/`Swagger` specification." ] }, { @@ -271,9 +271,7 @@ "cell_type": "code", "execution_count": 9, "id": "38762cc0", - "metadata": { - "scrolled": false - }, + "metadata": {}, "outputs": [ { "name": "stdout", @@ -449,9 +447,7 @@ "cell_type": "code", "execution_count": 28, "id": "3a9cc939", - "metadata": { - "scrolled": false - }, + "metadata": {}, "outputs": [ { "name": "stdout", @@ -773,7 +769,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.1" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/openapi_nla.ipynb b/docs/extras/integrations/toolkits/openapi_nla.ipynb index c2f3b90e41f..a731e282d09 100644 --- a/docs/extras/integrations/toolkits/openapi_nla.ipynb +++ b/docs/extras/integrations/toolkits/openapi_nla.ipynb @@ -7,7 +7,9 @@ "source": [ "# Natural Language APIs\n", "\n", - "Natural Language API Toolkits (NLAToolkits) permit LangChain Agents to efficiently plan and combine calls across endpoints. This notebook demonstrates a sample composition of the Speak, Klarna, and Spoonacluar APIs.\n", + "`Natural Language API` Toolkits (`NLAToolkits`) permit LangChain Agents to efficiently plan and combine calls across endpoints. 
\n", + "\n", + "This notebook demonstrates a sample composition of the `Speak`, `Klarna`, and `Spoonacluar` APIs.\n", "\n", "For a detailed walkthrough of the OpenAPI chains wrapped within the NLAToolkit, see the [OpenAPI Operation Chain](/docs/use_cases/apis/openapi.html) notebook.\n", "\n", @@ -182,7 +184,7 @@ "id": "c61d92a8", "metadata": {}, "source": [ - "### Using Auth + Adding more Endpoints\n", + "### Use Auth and add more Endpoints\n", "\n", "Some endpoints may require user authentication via things like access tokens. Here we show how to pass in the authentication information via the `Requests` wrapper object.\n", "\n", @@ -420,7 +422,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/pandas.ipynb b/docs/extras/integrations/toolkits/pandas.ipynb index b54b0076c96..000eaa0dcf1 100644 --- a/docs/extras/integrations/toolkits/pandas.ipynb +++ b/docs/extras/integrations/toolkits/pandas.ipynb @@ -5,11 +5,11 @@ "id": "c81da886", "metadata": {}, "source": [ - "# Pandas Dataframe Agent\n", + "# Pandas Dataframe\n", "\n", - "This notebook shows how to use agents to interact with a pandas dataframe. It is mostly optimized for question answering.\n", + "This notebook shows how to use agents to interact with a `Pandas DataFrame`. It is mostly optimized for question answering.\n", "\n", - "**NOTE: this agent calls the Python agent under the hood, which executes LLM generated Python code - this can be bad if the LLM generated Python code is harmful. Use cautiously.**" + "**NOTE: this agent calls the `Python` agent under the hood, which executes LLM generated Python code - this can be bad if the LLM generated Python code is harmful. Use cautiously.**" ] }, { @@ -42,9 +42,9 @@ "id": "a62858e2", "metadata": {}, "source": [ - "## Using ZERO_SHOT_REACT_DESCRIPTION\n", + "## Using `ZERO_SHOT_REACT_DESCRIPTION`\n", "\n", - "This shows how to initialize the agent using the ZERO_SHOT_REACT_DESCRIPTION agent type. Note that this is an alternative to the above." + "This shows how to initialize the agent using the `ZERO_SHOT_REACT_DESCRIPTION` agent type. Note that this is an alternative to the above." ] }, { @@ -212,7 +212,7 @@ "id": "c4bc0584", "metadata": {}, "source": [ - "### Multi DataFrame Example\n", + "## Multi DataFrame Example\n", "\n", "This next part shows how the agent can interact with multiple dataframes passed in as a list." ] @@ -292,7 +292,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.1" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/playwright.ipynb b/docs/extras/integrations/toolkits/playwright.ipynb index 50d2825da9d..ccf569506b7 100644 --- a/docs/extras/integrations/toolkits/playwright.ipynb +++ b/docs/extras/integrations/toolkits/playwright.ipynb @@ -4,17 +4,19 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# PlayWright Browser Toolkit\n", + "# PlayWright Browser\n", "\n", - "This toolkit is used to interact with the browser. While other tools (like the Requests tools) are fine for static sites, Browser toolkits let your agent navigate the web and interact with dynamically rendered sites. Some tools bundled within the Browser toolkit include:\n", + "This toolkit is used to interact with the browser. 
While other tools (like the `Requests` tools) are fine for static sites, `PlayWright Browser` toolkits let your agent navigate the web and interact with dynamically rendered sites. \n", "\n", - "- NavigateTool (navigate_browser) - navigate to a URL\n", - "- NavigateBackTool (previous_page) - wait for an element to appear\n", - "- ClickTool (click_element) - click on an element (specified by selector)\n", - "- ExtractTextTool (extract_text) - use beautiful soup to extract text from the current web page\n", - "- ExtractHyperlinksTool (extract_hyperlinks) - use beautiful soup to extract hyperlinks from the current web page\n", - "- GetElementsTool (get_elements) - select elements by CSS selector\n", - "- CurrentPageTool (current_page) - get the current page URL\n" + "Some tools bundled within the `PlayWright Browser` toolkit include:\n", + "\n", + "- `NavigateTool` (navigate_browser) - navigate to a URL\n", + "- `NavigateBackTool` (previous_page) - wait for an element to appear\n", + "- `ClickTool` (click_element) - click on an element (specified by selector)\n", + "- `ExtractTextTool` (extract_text) - use beautiful soup to extract text from the current web page\n", + "- `ExtractHyperlinksTool` (extract_hyperlinks) - use beautiful soup to extract hyperlinks from the current web page\n", + "- `GetElementsTool` (get_elements) - select elements by CSS selector\n", + "- `CurrentPageTool` (current_page) - get the current page URL\n" ] }, { @@ -327,7 +329,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.2" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/powerbi.ipynb b/docs/extras/integrations/toolkits/powerbi.ipynb index 8ca60a9654e..475e66e612f 100644 --- a/docs/extras/integrations/toolkits/powerbi.ipynb +++ b/docs/extras/integrations/toolkits/powerbi.ipynb @@ -2,36 +2,40 @@ "cells": [ { "cell_type": "markdown", + "id": "9363398d", + "metadata": {}, "source": [ - "# PowerBI Dataset Agent\n", + "# PowerBI Dataset\n", "\n", - "This notebook showcases an agent designed to interact with a Power BI Dataset. The agent is designed to answer more general questions about a dataset, as well as recover from errors.\n", + "This notebook showcases an agent interacting with a `Power BI Dataset`. The agent is answering more general questions about a dataset, as well as recover from errors.\n", "\n", "Note that, as this agent is in active development, all answers might not be correct. It runs against the [executequery endpoint](https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/execute-queries), which does not allow deletes.\n", "\n", - "### Some notes\n", + "### Notes:\n", "- It relies on authentication with the azure.identity package, which can be installed with `pip install azure-identity`. Alternatively you can create the powerbi dataset with a token as a string without supplying the credentials.\n", "- You can also supply a username to impersonate for use with datasets that have RLS enabled. \n", "- The toolkit uses a LLM to create the query from the question, the agent uses the LLM for the overall execution.\n", "- Testing was done mostly with a `text-davinci-003` model, codex models did not seem to perform ver well." 
- ], - "metadata": {}, - "attachments": {}, - "id": "9363398d" + ] }, { "cell_type": "markdown", - "source": [ - "## Initialization" - ], + "id": "0725445e", "metadata": { "tags": [] }, - "id": "0725445e" + "source": [ + "## Initialization" + ] }, { "cell_type": "code", "execution_count": null, + "id": "c82f33e9", + "metadata": { + "tags": [] + }, + "outputs": [], "source": [ "from langchain.agents.agent_toolkits import create_pbi_agent\n", "from langchain.agents.agent_toolkits import PowerBIToolkit\n", @@ -39,16 +43,16 @@ "from langchain.chat_models import ChatOpenAI\n", "from langchain.agents import AgentExecutor\n", "from azure.identity import DefaultAzureCredential" - ], - "outputs": [], - "metadata": { - "tags": [] - }, - "id": "c82f33e9" + ] }, { "cell_type": "code", "execution_count": null, + "id": "0b2c5853", + "metadata": { + "tags": [] + }, + "outputs": [], "source": [ "fast_llm = ChatOpenAI(\n", " temperature=0.5, max_tokens=1000, model_name=\"gpt-3.5-turbo\", verbose=True\n", @@ -69,99 +73,95 @@ " toolkit=toolkit,\n", " verbose=True,\n", ")" - ], - "outputs": [], - "metadata": { - "tags": [] - }, - "id": "0b2c5853" + ] }, { "cell_type": "markdown", + "id": "80c92be3", + "metadata": {}, "source": [ "## Example: describing a table" - ], - "metadata": {}, - "id": "80c92be3" + ] }, { "cell_type": "code", "execution_count": null, - "source": [ - "agent_executor.run(\"Describe table1\")" - ], - "outputs": [], + "id": "90f236cb", "metadata": { "tags": [] }, - "id": "90f236cb" + "outputs": [], + "source": [ + "agent_executor.run(\"Describe table1\")" + ] }, { "cell_type": "markdown", + "id": "b464930f", + "metadata": {}, "source": [ "## Example: simple query on a table\n", "In this example, the agent actually figures out the correct query to get a row count of the table." 
- ], - "metadata": {}, - "attachments": {}, - "id": "b464930f" + ] }, { "cell_type": "code", "execution_count": null, + "id": "b668c907", + "metadata": { + "tags": [] + }, + "outputs": [], "source": [ "agent_executor.run(\"How many records are in table1?\")" - ], - "outputs": [], - "metadata": { - "tags": [] - }, - "id": "b668c907" + ] }, { "cell_type": "markdown", + "id": "f2229a2f", + "metadata": {}, "source": [ "## Example: running queries" - ], - "metadata": {}, - "id": "f2229a2f" + ] }, { "cell_type": "code", "execution_count": null, + "id": "865a420f", + "metadata": { + "tags": [] + }, + "outputs": [], "source": [ "agent_executor.run(\"How many records are there by dimension1 in table2?\")" - ], - "outputs": [], - "metadata": { - "tags": [] - }, - "id": "865a420f" + ] }, { "cell_type": "code", "execution_count": null, - "source": [ - "agent_executor.run(\"What unique values are there for dimensions2 in table2\")" - ], - "outputs": [], + "id": "120cd49a", "metadata": { "tags": [] }, - "id": "120cd49a" + "outputs": [], + "source": [ + "agent_executor.run(\"What unique values are there for dimensions2 in table2\")" + ] }, { "cell_type": "markdown", + "id": "ac584fb2", + "metadata": {}, "source": [ "## Example: add your own few-shot prompts" - ], - "metadata": {}, - "attachments": {}, - "id": "ac584fb2" + ] }, { "cell_type": "code", "execution_count": null, + "id": "ffa66827", + "metadata": {}, + "outputs": [], "source": [ "# fictional example\n", "few_shots = \"\"\"\n", @@ -189,26 +189,27 @@ " toolkit=toolkit,\n", " verbose=True,\n", ")" - ], - "outputs": [], - "metadata": {}, - "id": "ffa66827" + ] }, { "cell_type": "code", "execution_count": null, + "id": "3be44685", + "metadata": {}, + "outputs": [], "source": [ "agent_executor.run(\"What was the maximum of value in revenue in dollars in 2022?\")" - ], - "outputs": [], - "metadata": {}, - "id": "3be44685" + ] } ], "metadata": { + "interpreter": { + "hash": "397704579725e15f5c7cb49fe5f0341eb7531c82d19f2c29d197e8b64ab5776b" + }, "kernelspec": { - "name": "python3", - "display_name": "Python 3.9.16 64-bit" + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" }, "language_info": { "codemirror_mode": { @@ -220,12 +221,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.16" - }, - "interpreter": { - "hash": "397704579725e15f5c7cb49fe5f0341eb7531c82d19f2c29d197e8b64ab5776b" + "version": "3.10.12" } }, "nbformat": 4, "nbformat_minor": 5 -} \ No newline at end of file +} diff --git a/docs/extras/integrations/toolkits/python.ipynb b/docs/extras/integrations/toolkits/python.ipynb index 41faeff3f9d..3c1f6b50c46 100644 --- a/docs/extras/integrations/toolkits/python.ipynb +++ b/docs/extras/integrations/toolkits/python.ipynb @@ -5,9 +5,9 @@ "id": "82a4c2cc-20ea-4b20-a565-63e905dee8ff", "metadata": {}, "source": [ - "# Python Agent\n", + "# Python\n", "\n", - "This notebook showcases an agent designed to write and execute python code to answer a question." + "This notebook showcases an agent designed to write and execute `Python` code to answer a question." ] }, { @@ -32,7 +32,7 @@ "id": "ca30d64c", "metadata": {}, "source": [ - "## Using ZERO_SHOT_REACT_DESCRIPTION\n", + "## Using `ZERO_SHOT_REACT_DESCRIPTION`\n", "\n", "This shows how to initialize the agent using the ZERO_SHOT_REACT_DESCRIPTION agent type." 
] @@ -149,9 +149,7 @@ "cell_type": "code", "execution_count": 5, "id": "4b9f60e7-eb6a-4f14-8604-498d863d4482", - "metadata": { - "scrolled": false - }, + "metadata": {}, "outputs": [ { "name": "stdout", @@ -271,7 +269,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.3" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/spark.ipynb b/docs/extras/integrations/toolkits/spark.ipynb index 7cab26251d1..d55075c2b00 100644 --- a/docs/extras/integrations/toolkits/spark.ipynb +++ b/docs/extras/integrations/toolkits/spark.ipynb @@ -1,13 +1,12 @@ { "cells": [ { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ - "# Spark Dataframe Agent\n", + "# Spark Dataframe\n", "\n", - "This notebook shows how to use agents to interact with a Spark dataframe and Spark Connect. It is mostly optimized for question answering.\n", + "This notebook shows how to use agents to interact with a `Spark DataFrame` and `Spark Connect`. It is mostly optimized for question answering.\n", "\n", "**NOTE: this agent calls the Python agent under the hood, which executes LLM generated Python code - this can be bad if the LLM generated Python code is harmful. Use cautiously.**" ] @@ -23,6 +22,13 @@ "os.environ[\"OPENAI_API_KEY\"] = \"...input your openai api key here...\"" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## `Spark DataFrame` example" + ] + }, { "cell_type": "code", "execution_count": 2, @@ -225,11 +231,10 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ - "## Spark Connect Example" + "## `Spark Connect` example" ] }, { @@ -405,9 +410,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.1" + "version": "3.10.12" } }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 4 } diff --git a/docs/extras/integrations/toolkits/spark_sql.ipynb b/docs/extras/integrations/toolkits/spark_sql.ipynb index c29f6841c99..7ed93552cb6 100644 --- a/docs/extras/integrations/toolkits/spark_sql.ipynb +++ b/docs/extras/integrations/toolkits/spark_sql.ipynb @@ -4,9 +4,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Spark SQL Agent\n", + "# Spark SQL\n", "\n", - "This notebook shows how to use agents to interact with a Spark SQL. Similar to [SQL Database Agent](https://python.langchain.com/docs/integrations/toolkits/sql_database), it is designed to address general inquiries about Spark SQL and facilitate error recovery.\n", + "This notebook shows how to use agents to interact with `Spark SQL`. Similar to [SQL Database Agent](https://python.langchain.com/docs/integrations/toolkits/sql_database), it is designed to address general inquiries about `Spark SQL` and facilitate error recovery.\n", "\n", "**NOTE: Note that, as this agent is in active development, all answers might not be correct. Additionally, it is not guaranteed that the agent won't perform DML statements on your Spark cluster given certain questions. Be careful running it on sensitive data!**" ] @@ -163,7 +163,9 @@ }, { "data": { - "text/plain": "'The titanic table has the following columns: PassengerId (INT), Survived (INT), Pclass (INT), Name (STRING), Sex (STRING), Age (DOUBLE), SibSp (INT), Parch (INT), Ticket (STRING), Fare (DOUBLE), Cabin (STRING), and Embarked (STRING). Here are some sample rows from the table: \\n\\n1. PassengerId: 1, Survived: 0, Pclass: 3, Name: Braund, Mr. 
Owen Harris, Sex: male, Age: 22.0, SibSp: 1, Parch: 0, Ticket: A/5 21171, Fare: 7.25, Cabin: None, Embarked: S\\n2. PassengerId: 2, Survived: 1, Pclass: 1, Name: Cumings, Mrs. John Bradley (Florence Briggs Thayer), Sex: female, Age: 38.0, SibSp: 1, Parch: 0, Ticket: PC 17599, Fare: 71.2833, Cabin: C85, Embarked: C\\n3. PassengerId: 3, Survived: 1, Pclass: 3, Name: Heikkinen, Miss. Laina, Sex: female, Age: 26.0, SibSp: 0, Parch: 0, Ticket: STON/O2. 3101282, Fare: 7.925, Cabin: None, Embarked: S'" + "text/plain": [ + "'The titanic table has the following columns: PassengerId (INT), Survived (INT), Pclass (INT), Name (STRING), Sex (STRING), Age (DOUBLE), SibSp (INT), Parch (INT), Ticket (STRING), Fare (DOUBLE), Cabin (STRING), and Embarked (STRING). Here are some sample rows from the table: \\n\\n1. PassengerId: 1, Survived: 0, Pclass: 3, Name: Braund, Mr. Owen Harris, Sex: male, Age: 22.0, SibSp: 1, Parch: 0, Ticket: A/5 21171, Fare: 7.25, Cabin: None, Embarked: S\\n2. PassengerId: 2, Survived: 1, Pclass: 1, Name: Cumings, Mrs. John Bradley (Florence Briggs Thayer), Sex: female, Age: 38.0, SibSp: 1, Parch: 0, Ticket: PC 17599, Fare: 71.2833, Cabin: C85, Embarked: C\\n3. PassengerId: 3, Survived: 1, Pclass: 3, Name: Heikkinen, Miss. Laina, Sex: female, Age: 26.0, SibSp: 0, Parch: 0, Ticket: STON/O2. 3101282, Fare: 7.925, Cabin: None, Embarked: S'" + ] }, "execution_count": 4, "metadata": {}, @@ -239,7 +241,9 @@ }, { "data": { - "text/plain": "'The square root of the average age is approximately 5.45.'" + "text/plain": [ + "'The square root of the average age is approximately 5.45.'" + ] }, "execution_count": 5, "metadata": {}, @@ -253,6 +257,12 @@ { "cell_type": "code", "execution_count": 6, + "metadata": { + "collapsed": false, + "jupyter": { + "outputs_hidden": false + } + }, "outputs": [ { "name": "stdout", @@ -305,7 +315,9 @@ }, { "data": { - "text/plain": "'The oldest survived passenger is Barkworth, Mr. Algernon Henry Wilson, who was 80 years old.'" + "text/plain": [ + "'The oldest survived passenger is Barkworth, Mr. Algernon Henry Wilson, who was 80 years old.'" + ] }, "execution_count": 6, "metadata": {}, @@ -314,10 +326,7 @@ ], "source": [ "agent_executor.run(\"What's the name of the oldest survived passenger?\")" - ], - "metadata": { - "collapsed": false - } + ] } ], "metadata": { @@ -336,9 +345,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.11.2" + "version": "3.10.12" } }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 4 } diff --git a/docs/extras/integrations/toolkits/sql_database.ipynb b/docs/extras/integrations/toolkits/sql_database.ipynb index 9fbc31da236..eae793da1ac 100644 --- a/docs/extras/integrations/toolkits/sql_database.ipynb +++ b/docs/extras/integrations/toolkits/sql_database.ipynb @@ -1,22 +1,21 @@ { "cells": [ { - "attachments": {}, "cell_type": "markdown", "id": "0e499e90-7a6d-4fab-8aab-31a4df417601", "metadata": {}, "source": [ - "# SQL Database Agent\n", + "# SQL Database\n", "\n", - "This notebook showcases an agent designed to interact with a sql databases. The agent builds off of [SQLDatabaseChain](https://python.langchain.com/docs/use_cases/tabular/sqlite) and is designed to answer more general questions about a database, as well as recover from errors.\n", + "This notebook showcases an agent designed to interact with a `SQL` databases. 
\n", + "The agent builds off of [SQLDatabaseChain](https://python.langchain.com/docs/use_cases/tabular/sqlite) and is designed to answer more general questions about a database, as well as recover from errors.\n", "\n", "Note that, as this agent is in active development, all answers might not be correct. Additionally, it is not guaranteed that the agent won't perform DML statements on your database given certain questions. Be careful running it on sensitive data!\n", "\n", - "This uses the example Chinook database. To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository." + "This uses the example `Chinook` database. To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository." ] }, { - "attachments": {}, "cell_type": "markdown", "id": "ec927ac6-9b2a-4e8a-9a6e-3e429191875c", "metadata": { @@ -56,12 +55,11 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "f74d1792", "metadata": {}, "source": [ - "## Using ZERO_SHOT_REACT_DESCRIPTION\n", + "## Using `ZERO_SHOT_REACT_DESCRIPTION`\n", "\n", "This shows how to initialize the agent using the ZERO_SHOT_REACT_DESCRIPTION agent type." ] @@ -84,7 +82,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "971cc455", "metadata": {}, @@ -110,7 +107,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "54c01168", "metadata": {}, @@ -136,7 +132,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "5a4a9455", "metadata": {}, @@ -147,7 +142,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "36ae48c7-cb08-4fef-977e-c7d4b96a464b", "metadata": {}, @@ -237,7 +231,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "9abcfe8e-1868-42a4-8345-ad2d9b44c681", "metadata": {}, @@ -312,7 +305,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "6fbc26af-97e4-4a21-82aa-48bdc992da26", "metadata": {}, @@ -495,7 +487,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "7c7503b5-d9d9-4faa-b064-29fcdb5ff213", "metadata": {}, @@ -639,7 +630,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.1" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/vectorstore.ipynb b/docs/extras/integrations/toolkits/vectorstore.ipynb index 69ac05bd5f2..db388fdb077 100644 --- a/docs/extras/integrations/toolkits/vectorstore.ipynb +++ b/docs/extras/integrations/toolkits/vectorstore.ipynb @@ -1,23 +1,21 @@ { "cells": [ { - "attachments": {}, "cell_type": "markdown", "id": "18ada398-dce6-4049-9b56-fc0ede63da9c", "metadata": {}, "source": [ - "# Vectorstore Agent\n", + "# Vectorstore\n", "\n", "This notebook showcases an agent designed to retrieve information from one or more vectorstores, either with or without sources." 
] }, { - "attachments": {}, "cell_type": "markdown", "id": "eecb683b-3a46-4b9d-81a3-7caefbfec1a1", "metadata": {}, "source": [ - "## Create the Vectorstores" + "## Create Vectorstores" ] }, { @@ -95,7 +93,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "f4814175-964d-42f1-aa9d-22801ce1e912", "metadata": {}, @@ -128,7 +125,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "8a38ad10", "metadata": {}, @@ -217,7 +213,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "7ca07707", "metadata": {}, @@ -263,7 +258,6 @@ ] }, { - "attachments": {}, "cell_type": "markdown", "id": "71680984-edaf-4a63-90f5-94edbd263550", "metadata": {}, @@ -422,7 +416,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.1" + "version": "3.10.12" } }, "nbformat": 4, diff --git a/docs/extras/integrations/toolkits/xorbits.ipynb b/docs/extras/integrations/toolkits/xorbits.ipynb index dd3e6a108a5..c97ca83b66b 100644 --- a/docs/extras/integrations/toolkits/xorbits.ipynb +++ b/docs/extras/integrations/toolkits/xorbits.ipynb @@ -4,7 +4,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Xorbits Agent" + "# Xorbits" ] }, { @@ -13,7 +13,7 @@ "source": [ "This notebook shows how to use agents to interact with [Xorbits Pandas](https://doc.xorbits.io/en/latest/reference/pandas/index.html) dataframe and [Xorbits Numpy](https://doc.xorbits.io/en/latest/reference/numpy/index.html) ndarray. It is mostly optimized for question answering.\n", "\n", - "**NOTE: this agent calls the Python agent under the hood, which executes LLM generated Python code - this can be bad if the LLM generated Python code is harmful. Use cautiously.**" + "**NOTE: this agent calls the `Python` agent under the hood, which executes LLM generated Python code - this can be bad if the LLM generated Python code is harmful. Use cautiously.**" ] }, { @@ -734,9 +734,9 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.9.13" + "version": "3.10.12" } }, "nbformat": 4, - "nbformat_minor": 2 + "nbformat_minor": 4 }
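
For context on what these renamed notebooks document, here is a minimal sketch of the CSV agent setup described in the `csv.ipynb` hunk above, using only the imports that hunk shows. It assumes an `OPENAI_API_KEY` environment variable is set and that a local `titanic.csv` file exists; the file name and the question are illustrative assumptions, not part of the patch.

```python
from langchain.chat_models import ChatOpenAI
from langchain.agents.agent_types import AgentType
from langchain.agents import create_csv_agent

# Build the agent: the LLM plans with the ZERO_SHOT_REACT_DESCRIPTION strategy and
# delegates execution to the Pandas DataFrame agent under the hood (see the NOTE in
# the notebook about running LLM-generated Python code cautiously).
agent = create_csv_agent(
    ChatOpenAI(temperature=0),
    "titanic.csv",  # hypothetical local CSV; substitute your own file
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)

agent.run("How many rows are in the file?")
```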