Mirror of https://github.com/hwchase17/langchain.git (synced 2026-02-08 18:19:21 +00:00)

Compare commits: harrison/r ... harrison/p (15 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | 3afce9e421 |  |
|  | 72d4709772 |  |
|  | e7c34a1f9a |  |
|  | 652f0fcb07 |  |
|  | 57449837c0 |  |
|  | d7696ec341 |  |
|  | 4f89a49847 |  |
|  | 39390abc22 |  |
|  | 7055471721 |  |
|  | a897afb965 |  |
|  | fab49f41cf |  |
|  | f154c885aa |  |
|  | 43a601a3b8 |  |
|  | 39c5ff56f2 |  |
|  | a8c1705c2f |  |
12 .github/actions/poetry_setup/action.yml (vendored)

@@ -33,13 +33,11 @@ runs:
  using: composite
  steps:
    - uses: actions/setup-python@v4
      name: Setup python ${{ inputs.python-version }}
      with:
        python-version: ${{ inputs.python-version }}

    - uses: actions/cache@v3
      id: cache-pip
      name: Cache Pip ${{ inputs.python-version }}
      env:
        SEGMENT_DOWNLOAD_TIMEOUT_MIN: "15"
      with:
@@ -50,16 +48,6 @@ runs:
    - run: pipx install poetry==${{ inputs.poetry-version }} --python python${{ inputs.python-version }}
      shell: bash

    - name: Check Poetry File
      shell: bash
      run: |
        poetry check

    - name: Check lock file
      shell: bash
      run: |
        poetry lock --check

    - uses: actions/cache@v3
      id: cache-poetry
      env:
2 .github/workflows/linkcheck.yml (vendored)

@@ -4,8 +4,6 @@ on:
  push:
    branches: [master]
  pull_request:
    paths:
      - 'docs/**'

env:
  POETRY_VERSION: "1.4.2"
10 Makefile

@@ -35,13 +35,13 @@ lint lint_diff:
TEST_FILE ?= tests/unit_tests/

test:
	poetry run pytest --disable-socket --allow-unix-socket $(TEST_FILE)
	poetry run pytest $(TEST_FILE)

tests:
	poetry run pytest --disable-socket --allow-unix-socket $(TEST_FILE)
tests:
	poetry run pytest $(TEST_FILE)

extended_tests:
	poetry run pytest --disable-socket --allow-unix-socket --only-extended tests/unit_tests
	poetry run pytest --only-extended tests/unit_tests

test_watch:
	poetry run ptw --now . -- tests/unit_tests
@@ -62,7 +62,7 @@ help:
	@echo 'format - run code formatters'
	@echo 'lint - run linters'
	@echo 'test - run unit tests'
	@echo 'tests - run unit tests'
	@echo 'test - run unit tests'
	@echo 'test TEST_FILE=<test_file> - run all tests in file'
	@echo 'extended_tests - run only extended unit tests'
	@echo 'test_watch - run unit tests in watch mode'
@@ -1,90 +0,0 @@
# YouTube

This is a collection of `LangChain` videos on `YouTube`.

### ⛓️[Official LangChain YouTube channel](https://www.youtube.com/@LangChain)⛓️

### Introduction to LangChain with Harrison Chase, creator of LangChain

- [Building the Future with LLMs, `LangChain`, & `Pinecone`](https://youtu.be/nMniwlGyX-c) by [Pinecone](https://www.youtube.com/@pinecone-io)
- [LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36](https://youtu.be/lhby7Ql7hbk) by [Weaviate • Vector Database](https://www.youtube.com/@Weaviate)
- [LangChain Demo + Q&A with Harrison Chase](https://youtu.be/zaYTXQFR0_s?t=788) by [Full Stack Deep Learning](https://www.youtube.com/@FullStackDeepLearning)
- [LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin)](https://youtu.be/gVkF8cwfBLI) by [Chat with data](https://www.youtube.com/@chatwithdata)
- ⛓️ [LangChain "Agents in Production" Webinar](https://youtu.be/k8GNCCs16F4) by [LangChain](https://www.youtube.com/@LangChain)

## Videos (sorted by views)

- [Building AI LLM Apps with LangChain (and more?) - LIVE STREAM](https://www.youtube.com/live/M-2Cj_2fzWI?feature=share) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)
- [First look - `ChatGPT` + `WolframAlpha` (`GPT-3.5` and Wolfram|Alpha via LangChain by James Weaver)](https://youtu.be/wYGbY811oMo) by [Dr Alan D. Thompson](https://www.youtube.com/@DrAlanDThompson)
- [LangChain explained - The hottest new Python framework](https://youtu.be/RoR4XJw8wIc) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- [Chatbot with INFINITE MEMORY using `OpenAI` & `Pinecone` - `GPT-3`, `Embeddings`, `ADA`, `Vector DB`, `Semantic`](https://youtu.be/2xNzB7xq8nk) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [LangChain for LLMs is... basically just an Ansible playbook](https://youtu.be/X51N9C-OhlE) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [Build your own LLM Apps with LangChain & `GPT-Index`](https://youtu.be/-75p09zFUJY) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [`BabyAGI` - New System of Autonomous AI Agents with LangChain](https://youtu.be/lg3kJvf1kXo) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [Run `BabyAGI` with Langchain Agents (with Python Code)](https://youtu.be/WosPGHPObx8) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [How to Use Langchain With `Zapier` | Write and Send Email with GPT-3 | OpenAI API Tutorial](https://youtu.be/p9v2-xEa9A0) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [Use Your Locally Stored Files To Get Response From GPT - `OpenAI` | Langchain | Python](https://youtu.be/NC1Ni9KS-rk) by [Shweta Lodha](https://www.youtube.com/@shweta-lodha)
- [`Langchain JS` | How to Use GPT-3, GPT-4 to Reference your own Data | `OpenAI Embeddings` Intro](https://youtu.be/veV2I-NEjaM) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [The easiest way to work with large language models | Learn LangChain in 10min](https://youtu.be/kmbS6FDQh7c) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [4 Autonomous AI Agents: “Westworld” simulation `BabyAGI`, `AutoGPT`, `Camel`, `LangChain`](https://youtu.be/yWbnH6inT_U) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [AI CAN SEARCH THE INTERNET? Langchain Agents + OpenAI ChatGPT](https://youtu.be/J-GL0htqda8) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Query Your Data with GPT-4 | Embeddings, Vector Databases | Langchain JS Knowledgebase](https://youtu.be/jRnUPUTkZmU) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [`Weaviate` + LangChain for LLM apps presented by Erika Cardenas](https://youtu.be/7AGj4Td5Lgw) by [`Weaviate` • Vector Database](https://www.youtube.com/@Weaviate)
- [Langchain Overview - How to Use Langchain & `ChatGPT`](https://youtu.be/oYVYIq0lOtI) by [Python In Office](https://www.youtube.com/@pythoninoffice6568)
- [Custom langchain Agent & Tools with memory. Turn any `Python function` into langchain tool with Gpt 3](https://youtu.be/NIG8lXk0ULg) by [echohive](https://www.youtube.com/@echohive)
- [LangChain: Run Language Models Locally - `Hugging Face Models`](https://youtu.be/Xxxuw4_iCzw) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
- [`ChatGPT` with any `YouTube` video using langchain and `chromadb`](https://youtu.be/TQZfB2bzVwU) by [echohive](https://www.youtube.com/@echohive)
- [How to Talk to a `PDF` using LangChain and `ChatGPT`](https://youtu.be/v2i1YDtrIwk) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- [Langchain Document Loaders Part 1: Unstructured Files](https://youtu.be/O5C0wfsen98) by [Merk](https://www.youtube.com/@merksworld)
- [LangChain - Prompt Templates (what all the best prompt engineers use)](https://youtu.be/1aRu8b0XNOQ) by [Nick Daigler](https://www.youtube.com/@nick_daigs)
- [LangChain. Crear aplicaciones Python impulsadas por GPT](https://youtu.be/DkW_rDndts8) by [Jesús Conde](https://www.youtube.com/@0utKast)
- [Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial](https://youtu.be/fLy0VenZyGc) by [Rachel Woods](https://www.youtube.com/@therachelwoods)
- [`BabyAGI` + `GPT-4` Langchain Agent with Internet Access](https://youtu.be/wx1z_hs5P6E) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI](https://youtu.be/mb_YAABSplk) by [Arnoldas Kemeklis](https://www.youtube.com/@processusAI)
- [Get Started with LangChain in `Node.js`](https://youtu.be/Wxx1KUWJFv4) by [Developers Digest](https://www.youtube.com/@DevelopersDigest)
- [LangChain + `OpenAI` tutorial: Building a Q&A system w/ own text data](https://youtu.be/DYOU_Z0hAwo) by [Samuel Chan](https://www.youtube.com/@SamuelChan)
- [Langchain + `Zapier` Agent](https://youtu.be/yribLAb-pxA) by [Merk](https://www.youtube.com/@merksworld)
- [Connecting the Internet with `ChatGPT` (LLMs) using Langchain And Answers Your Questions](https://youtu.be/9Y0TBC63yZg) by [Kamalraj M M](https://www.youtube.com/@insightbuilder)
- [Build More Powerful LLM Applications for Business’s with LangChain (Beginners Guide)](https://youtu.be/sp3-WLKEcBg) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
- ⛓️ [LangFlow LLM Agent Demo for 🦜🔗LangChain](https://youtu.be/zJxDHaWt-6o) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
- ⛓️ [Chatbot Factory: Streamline Python Chatbot Creation with LLMs and Langchain](https://youtu.be/eYer3uzrcuM) by [Finxter](https://www.youtube.com/@CobusGreylingZA)
- ⛓️ [LangChain Tutorial - ChatGPT mit eigenen Daten](https://youtu.be/0XDLyY90E2c) by [Coding Crashkurse](https://www.youtube.com/@codingcrashkurse6429)
- ⛓️ [Chat with a `CSV` | LangChain Agents Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [GoDataProf](https://www.youtube.com/@godataprof)
- ⛓️ [Introdução ao Langchain - #Cortes - Live DataHackers](https://youtu.be/fw8y5VRei5Y) by [Prof. João Gabriel Lima](https://www.youtube.com/@profjoaogabriellima)
- ⛓️ [LangChain: Level up `ChatGPT` !? | LangChain Tutorial Part 1](https://youtu.be/vxUGx8aZpDE) by [Code Affinity](https://www.youtube.com/@codeaffinitydev)
- ⛓️ [KI schreibt krasses Youtube Skript 😲😳 | LangChain Tutorial Deutsch](https://youtu.be/QpTiXyK1jus) by [SimpleKI](https://www.youtube.com/@simpleki)
- ⛓️ [Chat with Audio: Langchain, `Chroma DB`, OpenAI, and `Assembly AI`](https://youtu.be/Kjy7cx1r75g) by [AI Anytime](https://www.youtube.com/@AIAnytime)
- ⛓️ [QA over documents with Auto vector index selection with Langchain router chains](https://youtu.be/9G05qybShv8) by [echohive](https://www.youtube.com/@echohive)
- ⛓️ [Build your own custom LLM application with `Bubble.io` & Langchain (No Code & Beginner friendly)](https://youtu.be/O7NhQGu1m6c) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
- ⛓️ [Simple App to Question Your Docs: Leveraging `Streamlit`, `Hugging Face Spaces`, LangChain, and `Claude`!](https://youtu.be/X4YbNECRr7o) by [Chris Alexiuk](https://www.youtube.com/@chrisalexiuk)
- ⛓️ [LANGCHAIN AI- `ConstitutionalChainAI` + Databutton AI ASSISTANT Web App](https://youtu.be/5zIU6_rdJCU) by [Avra](https://www.youtube.com/@Avra_b)
- ⛓️ [LANGCHAIN AI AUTONOMOUS AGENT WEB APP - 👶 `BABY AGI` 🤖 with EMAIL AUTOMATION using `DATABUTTON`](https://youtu.be/cvAwOGfeHgw) by [Avra](https://www.youtube.com/@Avra_b)
- ⛓️ [The Future of Data Analysis: Using A.I. Models in Data Analysis (LangChain)](https://youtu.be/v_LIcVyg5dk) by [Absent Data](https://www.youtube.com/@absentdata)
- ⛓️ [Memory in LangChain | Deep dive (python)](https://youtu.be/70lqvTFh_Yg) by [Eden Marco](https://www.youtube.com/@EdenMarco)
- ⛓️ [9 LangChain UseCases | Beginner's Guide | 2023](https://youtu.be/zS8_qosHNMw) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- ⛓️ [Use Large Language Models in Jupyter Notebook | LangChain | Agents & Indexes](https://youtu.be/JSe11L1a_QQ) by [Abhinaw Tiwari](https://www.youtube.com/@AbhinawTiwariAT)
- ⛓️ [How to Talk to Your Langchain Agent | `11 Labs` + `Whisper`](https://youtu.be/N4k459Zw2PU) by [VRSEN](https://www.youtube.com/@vrsen)
- ⛓️ [LangChain Deep Dive: 5 FUN AI App Ideas To Build Quickly and Easily](https://youtu.be/mPYEPzLkeks) by [James NoCode](https://www.youtube.com/@jamesnocode)
- ⛓️ [BEST OPEN Alternative to OPENAI's EMBEDDINGs for Retrieval QA: LangChain](https://youtu.be/ogEalPMUCSY) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
- ⛓️ [LangChain 101: Models](https://youtu.be/T6c_XsyaNSQ) by [Mckay Wrigley](https://www.youtube.com/@realmckaywrigley)
- ⛓️ [LangChain with JavaScript Tutorial #1 | Setup & Using LLMs](https://youtu.be/W3AoeMrg27o) by [Leon van Zyl](https://www.youtube.com/@leonvanzyl)
- ⛓️ [LangChain Overview & Tutorial for Beginners: Build Powerful AI Apps Quickly & Easily (ZERO CODE)](https://youtu.be/iI84yym473Q) by [James NoCode](https://www.youtube.com/@jamesnocode)
- ⛓️ [LangChain In Action: Real-World Use Case With Step-by-Step Tutorial](https://youtu.be/UO699Szp82M) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- ⛓️ [Summarizing and Querying Multiple Papers with LangChain](https://youtu.be/p_MQRWH5Y6k) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- ⛓️ [Using Langchain (and `Replit`) through `Tana`, ask `Google`/`Wikipedia`/`Wolfram Alpha` to fill out a table](https://youtu.be/Webau9lEzoI) by [Stian Håklev](https://www.youtube.com/@StianHaklev)
- ⛓️ [Langchain PDF App (GUI) | Create a ChatGPT For Your `PDF` in Python](https://youtu.be/wUAUdEw5oxM) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- ⛓️ [Auto-GPT with LangChain 🔥 | Create Your Own Personal AI Assistant](https://youtu.be/imDfPmMKEjM) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- ⛓️ [Create Your OWN Slack AI Assistant with Python & LangChain](https://youtu.be/3jFXRNn2Bu8) by [Dave Ebbelaar](https://www.youtube.com/@daveebbelaar)
- ⛓️ [How to Create LOCAL Chatbots with GPT4All and LangChain [Full Guide]](https://youtu.be/4p1Fojur8Zw) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
- ⛓️ [Build a `Multilingual PDF` Search App with LangChain, `Cohere` and `Bubble`](https://youtu.be/hOrtuumOrv8) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- ⛓️ [Building a LangChain Agent (code-free!) Using `Bubble` and `Flowise`](https://youtu.be/jDJIIVWTZDE) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- ⛓️ [Build a LangChain-based Semantic PDF Search App with No-Code Tools Bubble and Flowise](https://youtu.be/s33v5cIeqA4) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- ⛓️ [LangChain Memory Tutorial | Building a ChatGPT Clone in Python](https://youtu.be/Cwq91cj2Pnc) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- ⛓️ [ChatGPT For Your DATA | Chat with Multiple Documents Using LangChain](https://youtu.be/TeDgIDqQmzs) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- ⛓️ [`Llama Index`: Chat with Documentation using URL Loader](https://youtu.be/XJRoDEctAwA) by [Merk](https://www.youtube.com/@merksworld)
- ⛓️ [Using OpenAI, LangChain, and `Gradio` to Build Custom GenAI Applications](https://youtu.be/1MsmqMg3yUc) by [David Hundley](https://www.youtube.com/@dkhundley)

---------------------

⛓ icon marks a new video [last update 2023-05-15]
@@ -1,25 +0,0 @@
# Docugami

This page covers how to use [Docugami](https://docugami.com) within LangChain.

## What is Docugami?

Docugami converts business documents into a Document XML Knowledge Graph, generating forests of XML semantic trees representing entire documents. This is a rich representation that includes the semantic and structural characteristics of the various chunks in the document as an XML tree.

## Quick start

1. Create a Docugami workspace: http://www.docugami.com (free trials available)
2. Add your documents (PDF, DOCX, or DOC) and allow Docugami to ingest and cluster them into sets of similar documents, e.g. NDAs, Lease Agreements, and Service Agreements. There is no fixed set of document types supported by the system; the clusters created depend on your particular documents, and you can [change the docset assignments](https://help.docugami.com/home/working-with-the-doc-sets-view) later.
3. Create an access token via the Developer Playground for your workspace. Detailed instructions: https://help.docugami.com/home/docugami-api
4. Explore the Docugami API at https://api-docs.docugami.com/ to get a list of your processed docset IDs, or just the document IDs for a particular docset.
5. Use the DocugamiLoader as detailed in [this notebook](../modules/indexes/document_loaders/examples/docugami.ipynb) to get rich semantic chunks for your documents, as sketched below.
6. Optionally, build and publish one or more [reports or abstracts](https://help.docugami.com/home/reports). This helps Docugami improve the semantic XML with better tags based on your preferences, which are then added to the DocugamiLoader output as metadata. Use techniques like [self-querying retriever](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/self_query_retriever.html) to do high accuracy Document QA.
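A minimal sketch of step 5, assuming the `DocugamiLoader` from `langchain.document_loaders` and an access token in the `DOCUGAMI_API_KEY` environment variable; the docset ID below is a placeholder, not a real value:

```python
import os

from langchain.document_loaders import DocugamiLoader

# Access token from the Developer Playground (step 3); placeholder value.
os.environ["DOCUGAMI_API_KEY"] = "YOUR_ACCESS_TOKEN"

# Load every processed document in a docset as semantic chunks.
# "YOUR_DOCSET_ID" is a placeholder; list real IDs via the Docugami API.
loader = DocugamiLoader(docset_id="YOUR_DOCSET_ID")
docs = loader.load()

# Each chunk arrives as a Document whose metadata carries the XML
# semantic/structural tags described in the next section.
print(docs[0].page_content)
print(docs[0].metadata)
```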
## Advantages vs. other chunking techniques

Appropriate chunking of your documents is critical for retrieval from documents. Many chunking techniques exist, including simple ones that rely on whitespace and recursive chunk splitting based on character length. Docugami offers a different approach:

1. **Intelligent Chunking:** Docugami breaks down every document into a hierarchical semantic XML tree of chunks of varying sizes, from single words or numerical values to entire sections. These chunks follow the semantic contours of the document, providing a more meaningful representation than arbitrary-length or simple whitespace-based chunking.
2. **Structured Representation:** In addition, the XML tree indicates the structural contours of every document, using attributes denoting headings, paragraphs, lists, tables, and other common elements, and does so consistently across all supported document formats, such as scanned PDFs or DOCX files. It appropriately handles long-form document characteristics like page headers/footers or multi-column flows for clean text extraction.
3. **Semantic Annotations:** Chunks are annotated with semantic tags that are coherent across the document set, facilitating consistent hierarchical queries across multiple documents, even if they are written and formatted differently. For example, in a set of lease agreements, you can easily identify key provisions like the Landlord, Tenant, or Renewal Date, as well as more complex information such as the wording of any sub-lease provision or whether a specific jurisdiction has an exception section within a Termination Clause.
4. **Additional Metadata:** Chunks are also annotated with additional metadata, if a user has been using Docugami. This additional metadata can be used for high-accuracy Document QA without context window restrictions. See the detailed code walkthrough in [this notebook](../modules/indexes/document_loaders/examples/docugami.ipynb).
@@ -220,18 +220,7 @@ Open Source

+++

Answer questions about the documentation of any project

---

.. link-button:: https://github.com/akshata29/chatpdf
    :type: url
    :text: Chat & Ask your data
    :classes: stretched-link btn-lg

+++

This sample demonstrates a few approaches for creating ChatGPT-like experiences over your own data. It uses OpenAI / Azure OpenAI Service to access the ChatGPT model (gpt-35-turbo and gpt3), and a vector store (Pinecone, Redis, and others) or Azure Cognitive Search for data indexing and retrieval.
Answer questions about the documentation of any project

Misc. Colab Notebooks
~~~~~~~~~~~~~~~~~~~~~
@@ -1,106 +0,0 @@
# Tutorials

This is a collection of `LangChain` tutorials on `YouTube`.

⛓ icon marks a new video [last update 2023-05-15]

[LangChain Crash Course: Build an AutoGPT app in 25 minutes](https://youtu.be/MlK6SIjcjE8) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)

[LangChain Crash Course - Build apps with language models](https://youtu.be/LbT1yp6quS8) by [Patrick Loeber](https://www.youtube.com/@patloeber)

[LangChain Explained in 13 Minutes | QuickStart Tutorial for Beginners](https://youtu.be/aywZrzNaKjs) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)

### [LangChain for Gen AI and LLMs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F) by [James Briggs](https://www.youtube.com/@jamesbriggs):
- #1 [Getting Started with `GPT-3` vs. Open Source LLMs](https://youtu.be/nE2skSRWTTs)
- #2 [Prompt Templates for `GPT 3.5` and other LLMs](https://youtu.be/RflBcK0oDH0)
- #3 [LLM Chains using `GPT 3.5` and other LLMs](https://youtu.be/S8j9Tk0lZHU)
- #4 [Chatbot Memory for `Chat-GPT`, `Davinci` + other LLMs](https://youtu.be/X05uK0TZozM)
- #5 [Chat with OpenAI in LangChain](https://youtu.be/CnAgB3A5OlU)
- ⛓ #6 [Fixing LLM Hallucinations with Retrieval Augmentation in LangChain](https://youtu.be/kvdVduIJsc8)
- ⛓ #7 [LangChain Agents Deep Dive with GPT 3.5](https://youtu.be/jSP-gSEyVeI)
- ⛓ #8 [Create Custom Tools for Chatbots in LangChain](https://youtu.be/q-HNphrWsDE)
- ⛓ #9 [Build Conversational Agents with Vector DBs](https://youtu.be/H6bCqqw9xyI)

### [LangChain 101](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5) by [Data Independent](https://www.youtube.com/@DataIndependent):
- [What Is LangChain? - LangChain + `ChatGPT` Overview](https://youtu.be/_v_fgW2SkkQ)
- [Quickstart Guide](https://youtu.be/kYRB-vJFy38)
- [Beginner Guide To 7 Essential Concepts](https://youtu.be/2xxziIWmaSA)
- [`OpenAI` + `Wolfram Alpha`](https://youtu.be/UijbzCIJ99g)
- [Ask Questions On Your Custom (or Private) Files](https://youtu.be/EnT-ZTrcPrg)
- [Connect `Google Drive Files` To `OpenAI`](https://youtu.be/IqqHqDcXLww)
- [`YouTube Transcripts` + `OpenAI`](https://youtu.be/pNcQ5XXMgH4)
- [Question A 300 Page Book (w/ `OpenAI` + `Pinecone`)](https://youtu.be/h0DHDp1FbmQ)
- [Workaround `OpenAI's` Token Limit With Chain Types](https://youtu.be/f9_BWhCI4Zo)
- [Build Your Own OpenAI + LangChain Web App in 23 Minutes](https://youtu.be/U_eV8wfMkXU)
- [Working With The New `ChatGPT API`](https://youtu.be/e9P7FLi5Zy8)
- [OpenAI + LangChain Wrote Me 100 Custom Sales Emails](https://youtu.be/y1pyAQM-3Bo)
- [Structured Output From `OpenAI` (Clean Dirty Data)](https://youtu.be/KwAXfey-xQk)
- [Connect `OpenAI` To +5,000 Tools (LangChain + `Zapier`)](https://youtu.be/7tNm0yiDigU)
- [Use LLMs To Extract Data From Text (Expert Mode)](https://youtu.be/xZzvwR9jdPA)
- ⛓ [Extract Insights From Interview Transcripts Using LLMs](https://youtu.be/shkMOHwJ4SM)
- ⛓ [5 Levels Of LLM Summarizing: Novice to Expert](https://youtu.be/qaPMdcCqtWk)

### [LangChain How to and guides](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ) by [Sam Witteveen](https://www.youtube.com/@samwitteveenai):
- [LangChain Basics - LLMs & PromptTemplates with Colab](https://youtu.be/J_0qvRt4LNk)
- [LangChain Basics - Tools and Chains](https://youtu.be/hI2BY7yl_Ac)
- [`ChatGPT API` Announcement & Code Walkthrough with LangChain](https://youtu.be/phHqvLHCwH4)
- [Conversations with Memory (explanation & code walkthrough)](https://youtu.be/X550Zbz_ROE)
- [Chat with `Flan20B`](https://youtu.be/VW5LBavIfY4)
- [Using `Hugging Face Models` locally (code walkthrough)](https://youtu.be/Kn7SX2Mx_Jk)
- [`PAL`: Program-aided Language Models with LangChain code](https://youtu.be/dy7-LvDu-3s)
- [Building a Summarization System with LangChain and `GPT-3` - Part 1](https://youtu.be/LNq_2s_H01Y)
- [Building a Summarization System with LangChain and `GPT-3` - Part 2](https://youtu.be/d-yeHDLgKHw)
- [Microsoft's `Visual ChatGPT` using LangChain](https://youtu.be/7YEiEyfPF5U)
- [LangChain Agents - Joining Tools and Chains with Decisions](https://youtu.be/ziu87EXZVUE)
- [Comparing LLMs with LangChain](https://youtu.be/rFNG0MIEuW0)
- [Using `Constitutional AI` in LangChain](https://youtu.be/uoVqNFDwpX4)
- [Talking to `Alpaca` with LangChain - Creating an Alpaca Chatbot](https://youtu.be/v6sF8Ed3nTE)
- [Talk to your `CSV` & `Excel` with LangChain](https://youtu.be/xQ3mZhw69bc)
- [`BabyAGI`: Discover the Power of Task-Driven Autonomous Agents!](https://youtu.be/QBcDLSE2ERA)
- [Improve your `BabyAGI` with LangChain](https://youtu.be/DRgPyOXZ-oE)
- ⛓ [Master `PDF` Chat with LangChain - Your essential guide to queries on documents](https://youtu.be/ZzgUqFtxgXI)
- ⛓ [Using LangChain with `DuckDuckGO`, `Wikipedia` & `PythonREPL` Tools](https://youtu.be/KerHlb8nuVc)
- ⛓ [Building Custom Tools and Agents with LangChain (gpt-3.5-turbo)](https://youtu.be/biS8G8x8DdA)
- ⛓ [LangChain Retrieval QA Over Multiple Files with `ChromaDB`](https://youtu.be/3yPBVii7Ct0)
- ⛓ [LangChain Retrieval QA with Instructor Embeddings & `ChromaDB` for PDFs](https://youtu.be/cFCGUjc33aU)
- ⛓ [LangChain + Retrieval Local LLMs for Retrieval QA - No OpenAI!!!](https://youtu.be/9ISVjh8mdlA)

### [LangChain](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr) by [Prompt Engineering](https://www.youtube.com/@engineerprompt):
- [LangChain Crash Course — All You Need to Know to Build Powerful Apps with LLMs](https://youtu.be/5-fc4Tlgmro)
- [Working with MULTIPLE `PDF` Files in LangChain: `ChatGPT` for your Data](https://youtu.be/s5LhRdh5fu4)
- [`ChatGPT` for YOUR OWN `PDF` files with LangChain](https://youtu.be/TLf90ipMzfE)
- [Talk to YOUR DATA without OpenAI APIs: LangChain](https://youtu.be/wrD-fZvT6UI)
- ⛓️ [CHATGPT For WEBSITES: Custom ChatBOT](https://youtu.be/RBnuhhmD21U)

### LangChain by [Chat with data](https://www.youtube.com/@chatwithdata):
- [LangChain Beginner's Tutorial for `Typescript`/`Javascript`](https://youtu.be/bH722QgRlhQ)
- [`GPT-4` Tutorial: How to Chat With Multiple `PDF` Files (~1000 pages of Tesla's 10-K Annual Reports)](https://youtu.be/Ix9WIZpArm0)
- [`GPT-4` & LangChain Tutorial: How to Chat With A 56-Page `PDF` Document (w/ `Pinecone`)](https://youtu.be/ih9PBGVVOO4)
- ⛓ [LangChain & Supabase Tutorial: How to Build a ChatGPT Chatbot For Your Website](https://youtu.be/R2FMzcsmQY8)

### [Get SH\*T Done with Prompt Engineering and LangChain](https://www.youtube.com/watch?v=muXbPpG_ys4&list=PLEJK-H61Xlwzm5FYLDdKt_6yibO33zoMW) by [Venelin Valkov](https://www.youtube.com/@venelin_valkov):
- [Getting Started with LangChain: Load Custom Data, Run OpenAI Models, Embeddings and `ChatGPT`](https://www.youtube.com/watch?v=muXbPpG_ys4)
- [Loaders, Indexes & Vectorstores in LangChain: Question Answering on `PDF` files with `ChatGPT`](https://www.youtube.com/watch?v=FQnvfR8Dmr0)
- [LangChain Models: `ChatGPT`, `Flan Alpaca`, `OpenAI Embeddings`, Prompt Templates & Streaming](https://www.youtube.com/watch?v=zy6LiK5F5-s)
- [LangChain Chains: Use `ChatGPT` to Build Conversational Agents, Summaries and Q&A on Text With LLMs](https://www.youtube.com/watch?v=h1tJZQPcimM)
- [Analyze Custom CSV Data with `GPT-4` using Langchain](https://www.youtube.com/watch?v=Ew3sGdX8at4)
- ⛓ [Build ChatGPT Chatbots with LangChain Memory: Understanding and Implementing Memory in Conversations](https://youtu.be/CyuUlf54wTs)

---------------------

⛓ icon marks a new video [last update 2023-05-15]
@@ -1,44 +1,54 @@
# Concepts
# Glossary

These are concepts and terminology commonly used when developing LLM applications.
This is a collection of terminology commonly used when developing LLM applications.
It contains references to external papers or sources where the concept was first introduced,
as well as to places in LangChain where the concept is used.

## Chain of Thought
## Chain of Thought Prompting

`Chain of Thought (CoT)` is a prompting technique used to encourage the model to generate a series of intermediate reasoning steps.
A prompting technique used to encourage the model to generate a series of intermediate reasoning steps.
A less formal way to induce this behavior is to include “Let’s think step-by-step” in the prompt.

Resources:

- [Chain-of-Thought Paper](https://arxiv.org/pdf/2201.11903.pdf)
- [Step-by-Step Paper](https://arxiv.org/abs/2112.00114)

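As a minimal sketch (the prompt wording is illustrative, not taken from the papers), the informal step-by-step variant can be expressed as a LangChain `PromptTemplate`:

```python
from langchain import PromptTemplate

# Zero-shot chain-of-thought: the trailing "Let's think step-by-step."
# nudges the model to emit intermediate reasoning before its answer.
cot_prompt = PromptTemplate(
    input_variables=["question"],
    template="Q: {question}\nA: Let's think step-by-step.",
)

print(cot_prompt.format(question="If I have 3 apples and buy 2 more, how many do I have?"))
```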
## Action Plan Generation

`Action Plan Generation` is a prompting technique that uses a language model to generate actions to take.
A prompting technique that uses a language model to generate actions to take.
The results of these actions can then be fed back into the language model to generate a subsequent action.

Resources:

- [WebGPT Paper](https://arxiv.org/pdf/2112.09332.pdf)
- [SayCan Paper](https://say-can.github.io/assets/palm_saycan.pdf)

## ReAct
## ReAct Prompting

`ReAct` is a prompting technique that combines Chain-of-Thought prompting with action plan generation.
A prompting technique that combines Chain-of-Thought prompting with action plan generation.
This induces the model to think about what action to take, then take it.

Resources:

- [Paper](https://arxiv.org/pdf/2210.03629.pdf)
- [LangChain Example](../modules/agents/agents/examples/react.ipynb)
- [LangChain Example](modules/agents/agents/examples/react.ipynb)

## Self-ask

`Self-ask` is a prompting method that builds on top of chain-of-thought prompting.
A prompting method that builds on top of chain-of-thought prompting.
In this method, the model explicitly asks itself follow-up questions, which are then answered by an external search engine.

Resources:

- [Paper](https://ofir.io/self-ask.pdf)
- [LangChain Example](../modules/agents/agents/examples/self_ask_with_search.ipynb)
- [LangChain Example](modules/agents/agents/examples/self_ask_with_search.ipynb)

## Prompt Chaining

`Prompt Chaining` is combining multiple LLM calls, with the output of one step being the input to the next.
Combining multiple LLM calls together, with the output of one step being the input to the next.

Resources:

- [PromptChainer Paper](https://arxiv.org/pdf/2203.06566.pdf)
- [Language Model Cascades](https://arxiv.org/abs/2207.10342)
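A minimal sketch of prompt chaining with LangChain's `SimpleSequentialChain`; the two-step outline-then-expand task is illustrative:

```python
from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.chains import SimpleSequentialChain

llm = OpenAI(temperature=0)

# Step 1: produce a one-sentence outline for a topic.
outline_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["topic"],
        template="Write a one-sentence outline for an article about {topic}.",
    ),
)

# Step 2: the outline (step 1's output) becomes this prompt's input.
expand_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["outline"],
        template="Expand this outline into a short paragraph:\n{outline}",
    ),
)

chain = SimpleSequentialChain(chains=[outline_chain, expand_chain])
print(chain.run("vector databases"))
```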
@@ -47,29 +57,34 @@ In this method, the model explicitly asks itself follow-up questions, which are

## Memetic Proxy

`Memetic Proxy` is encouraging the LLM
to respond in a certain way by framing the discussion in a context that the model knows of and that
will result in that type of response.
For example, as a conversation between a student and a teacher.
Encouraging the LLM to respond in a certain way by framing the discussion in a context that the model knows of and that will result in that type of response. For example, as a conversation between a student and a teacher.

Resources:

- [Paper](https://arxiv.org/pdf/2102.07350.pdf)

## Self Consistency

`Self Consistency` is a decoding strategy that samples a diverse set of reasoning paths and then selects the most consistent answer.
A decoding strategy that samples a diverse set of reasoning paths and then selects the most consistent answer.
It is most effective when combined with Chain-of-Thought prompting.

Resources:

- [Paper](https://arxiv.org/pdf/2203.11171.pdf)

## Inception

`Inception` is also called `First Person Instruction`.
It is encouraging the model to think a certain way by including the start of the model’s response in the prompt.
Also called “First Person Instruction”.
Encouraging the model to think a certain way by including the start of the model’s response in the prompt.

Resources:

- [Example](https://twitter.com/goodside/status/1583262455207460865?s=20&t=8Hz7XBnK1OF8siQrxxCIGQ)
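A minimal sketch (the prompt text is illustrative): end the prompt with the beginning of the model's own reply, so the completion continues from it:

```python
from langchain import PromptTemplate

# First-person instruction: the prompt ends with the start of the
# model's own response, so the completion continues in that voice.
inception_prompt = PromptTemplate(
    input_variables=["question"],
    template="{question}\n\nSure, here is a careful, step-by-step answer:",
)

print(inception_prompt.format(question="Why is the sky blue?"))
```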
## MemPrompt

`MemPrompt` maintains a memory of errors and user feedback, and uses them to prevent repetition of mistakes.
MemPrompt maintains a memory of errors and user feedback, and uses them to prevent repetition of mistakes.

Resources:

- [Paper](https://memprompt.com/)
103 docs/index.rst

@@ -1,63 +1,51 @@
Welcome to LangChain
==========================

| **LangChain** is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be:
1. *Data-aware*: connect a language model to other sources of data
2. *Agentic*: allow a language model to interact with its environment
LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model via an API, but will also:

| The LangChain framework is designed around these principles.
- *Be data-aware*: connect a language model to other sources of data
- *Be agentic*: allow a language model to interact with its environment

| This is the Python-specific portion of the documentation. For a purely conceptual guide to LangChain, see `here <https://docs.langchain.com/docs/>`_. For the JavaScript documentation, see `here <https://js.langchain.com/docs/>`_.
The LangChain framework is designed with the above principles in mind.

This is the Python-specific portion of the documentation. For a purely conceptual guide to LangChain, see `here <https://docs.langchain.com/docs/>`_. For the JavaScript documentation, see `here <https://js.langchain.com/docs/>`_.

Getting Started
----------------

| How to get started using LangChain to create a Language Model application.
Check out the guide below for a walkthrough of how to get started using LangChain to create a Language Model application.

- `Quickstart Guide <./getting_started/getting_started.html>`_

| Concepts and terminology.

- `Concepts and terminology <./getting_started/concepts.html>`_

| Tutorials created by community experts and presented on YouTube.

- `Tutorials <./getting_started/tutorials.html>`_
- `Getting Started Documentation <./getting_started/getting_started.html>`_

.. toctree::
   :maxdepth: 2
   :maxdepth: 1
   :caption: Getting Started
   :name: getting_started
   :hidden:

   getting_started/getting_started.md
   getting_started/concepts.md
   getting_started/tutorials.md


Modules
-----------

| These modules are the core abstractions which we view as the building blocks of any LLM-powered application.
For each module LangChain provides standard, extendable interfaces. LangChain also provides external integrations and even end-to-end implementations for off-the-shelf use.
There are several main modules that LangChain provides support for.
For each module we provide some examples to get started, how-to guides, reference docs, and conceptual guides.
These modules are, in increasing order of complexity:

| The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides.
- `Models <./modules/models.html>`_: The various model types and model integrations LangChain supports.

| The modules are (from least to most complex):
- `Prompts <./modules/prompts.html>`_: This includes prompt management, prompt optimization, and prompt serialization.

- `Models <./modules/models.html>`_: Supported model types and integrations.
- `Memory <./modules/memory.html>`_: Memory is the concept of persisting state between calls of a chain/agent. LangChain provides a standard interface for memory, a collection of memory implementations, and examples of chains/agents that use memory.

- `Prompts <./modules/prompts.html>`_: Prompt management, optimization, and serialization.
- `Indexes <./modules/indexes.html>`_: Language models are often more powerful when combined with your own text data - this module covers best practices for doing exactly that.

- `Memory <./modules/memory.html>`_: Memory refers to state that is persisted between calls of a chain/agent.
- `Chains <./modules/chains.html>`_: Chains go beyond just a single LLM call, and are sequences of calls (whether to an LLM or a different utility). LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.

- `Indexes <./modules/indexes.html>`_: Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data.
- `Agents <./modules/agents.html>`_: Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end to end agents.

- `Chains <./modules/chains.html>`_: Chains are structured sequences of calls (to an LLM or to a different utility).
- `Callbacks <./modules/callbacks/getting_started.html>`_: It can be difficult to track all that occurs inside a chain or agent - callbacks help add a level of observability and introspection.

- `Agents <./modules/agents.html>`_: An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete.

- `Callbacks <./modules/callbacks/getting_started.html>`_: Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application.

.. toctree::
   :maxdepth: 1
@@ -76,29 +64,29 @@ For each module LangChain provides standard, extendable interfaces. LangChain a
Use Cases
----------

| Best practices and built-in implementations for common LangChain use cases:
The above modules can be used in a variety of ways. LangChain also provides guidance and assistance in this. Below are some of the common use cases LangChain supports.

- `Autonomous Agents <./use_cases/autonomous_agents.html>`_: Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI.
- `Autonomous Agents <./use_cases/autonomous_agents.html>`_: Autonomous agents are long running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI.

- `Agent Simulations <./use_cases/agent_simulations.html>`_: Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities.
- `Agent Simulations <./use_cases/agent_simulations.html>`_: Putting agents in a sandbox and observing how they interact with each other or to events can be an interesting way to observe their long-term memory abilities.

- `Personal Assistants <./use_cases/personal_assistants.html>`_: One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data.
- `Personal Assistants <./use_cases/personal_assistants.html>`_: The main LangChain use case. Personal assistants need to take actions, remember interactions, and have knowledge about your data.

- `Question Answering <./use_cases/question_answering.html>`_: Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.
- `Question Answering <./use_cases/question_answering.html>`_: The second big LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer.

- `Chatbots <./use_cases/chatbots.html>`_: Language models love to chat, making this a very natural use of them.
- `Chatbots <./use_cases/chatbots.html>`_: Since language models are good at producing text, that makes them ideal for creating chatbots.

- `Querying Tabular Data <./use_cases/tabular.html>`_: Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc).
- `Querying Tabular Data <./use_cases/tabular.html>`_: If you want to understand how to use LLMs to query data that is stored in a tabular format (csvs, SQL, dataframes, etc) you should read this page.

- `Code Understanding <./use_cases/code.html>`_: Recommended reading if you want to use language models to analyze code.
- `Code Understanding <./use_cases/code.html>`_: If you want to understand how to use LLMs to query source code from github, you should read this page.

- `Interacting with APIs <./use_cases/apis.html>`_: Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions.
- `Interacting with APIs <./use_cases/apis.html>`_: Enabling LLMs to interact with APIs is extremely powerful in order to give them more up-to-date information and allow them to take actions.

- `Extraction <./use_cases/extraction.html>`_: Extract structured information from text.

- `Summarization <./use_cases/summarization.html>`_: Compressing longer documents. A type of Data-Augmented Generation.
- `Summarization <./use_cases/summarization.html>`_: Summarizing longer documents into shorter, more condensed chunks of information. A type of Data Augmented Generation.

- `Evaluation <./use_cases/evaluation.html>`_: Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation.
- `Evaluation <./use_cases/evaluation.html>`_: Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.


.. toctree::
@@ -107,9 +95,9 @@ Use Cases
   :name: use_cases
   :hidden:

   ./use_cases/personal_assistants.md
   ./use_cases/autonomous_agents.md
   ./use_cases/agent_simulations.md
   ./use_cases/personal_assistants.md
   ./use_cases/question_answering.md
   ./use_cases/chatbots.md
   ./use_cases/tabular.rst
@@ -123,7 +111,7 @@ Use Cases
Reference Docs
---------------

| Full documentation on all methods, classes, installation methods, and integration setups for LangChain.
All of LangChain's reference documentation, in one place. Full documentation on all methods, classes, installation methods, and integration setups for LangChain.


- `Reference Documentation <./reference.html>`_
@@ -141,7 +129,7 @@ Reference Docs
LangChain Ecosystem
-------------------

| Guides for how other companies/products can be used with LangChain.
Guides for how other companies/products can be used with LangChain

- `LangChain Ecosystem <./ecosystem.html>`_

@@ -158,21 +146,23 @@ LangChain Ecosystem
Additional Resources
---------------------

| Additional resources we think may be useful as you develop your application!
Additional collection of resources we think may be useful as you develop your application!

- `LangChainHub <https://github.com/hwchase17/langchain-hub>`_: The LangChainHub is a place to share and explore other prompts, chains, and agents.

- `Gallery <./additional_resources/gallery.html>`_: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.
- `Glossary <./glossary.html>`_: A glossary of all related terms, papers, methods, etc. Whether implemented in LangChain or not!

- `Deployments <./additional_resources/deployments.html>`_: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.
- `Gallery <./gallery.html>`_: A collection of our favorite projects that use LangChain. Useful for finding inspiration or seeing how things were done in other applications.

- `Tracing <./additional_resources/tracing.html>`_: A guide on using tracing in LangChain to visualize the execution of chains and agents.
- `Deployments <./deployments.html>`_: A collection of instructions, code snippets, and template repositories for deploying LangChain apps.

- `Model Laboratory <./additional_resources/model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.
- `Tracing <./tracing.html>`_: A guide on using tracing in LangChain to visualize the execution of chains and agents.

- `Model Laboratory <./model_laboratory.html>`_: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so.

- `Discord <https://discord.gg/6adMQxSpJS>`_: Join us on our Discord to discuss all things LangChain!

- `YouTube <./additional_resources/youtube.html>`_: A collection of the LangChain tutorials and videos.
- `YouTube <./youtube.html>`_: A collection of the LangChain tutorials and videos.

- `Production Support <https://forms.gle/57d8AmXBYp8PP8tZA>`_: As you move your LangChains into production, we'd love to offer more comprehensive support. Please fill out this form and we'll set up a dedicated support Slack channel.

@@ -184,10 +174,11 @@ Additional Resources
   :hidden:

   LangChainHub <https://github.com/hwchase17/langchain-hub>
   ./additional_resources/gallery.rst
   ./additional_resources/deployments.md
   ./additional_resources/tracing.md
   ./additional_resources/model_laboratory.ipynb
   ./glossary.md
   ./gallery.rst
   ./deployments.md
   ./tracing.md
   ./use_cases/model_laboratory.ipynb
   Discord <https://discord.gg/6adMQxSpJS>
   ./additional_resources/youtube.md
   ./youtube.md
   Production Support <https://forms.gle/57d8AmXBYp8PP8tZA>
@@ -1,480 +1,396 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ba5f8741",
|
||||
"metadata": {
|
||||
"id": "ba5f8741"
|
||||
},
|
||||
"source": [
|
||||
"# Custom LLM Agent (with a ChatModel)\n",
|
||||
"\n",
|
||||
"This notebook goes through how to create your own custom agent based on a chat model.\n",
|
||||
"\n",
|
||||
"An LLM chat agent consists of three parts:\n",
|
||||
"\n",
|
||||
"- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do\n",
|
||||
"- ChatModel: This is the language model that powers the agent\n",
|
||||
"- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found\n",
|
||||
"- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:\n",
|
||||
"1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)\n",
|
||||
"2. If the Agent returns an `AgentFinish`, then return that directly to the user\n",
|
||||
"3. If the Agent returns an `AgentAction`, then use that to call a tool and get an `Observation`\n",
|
||||
"4. Repeat, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.\n",
|
||||
" \n",
|
||||
"`AgentAction` is a response that consists of `action` and `action_input`. `action` refers to which tool to use, and `action_input` refers to the input to that tool. `log` can also be provided as more context (that can be used for logging, tracing, etc).\n",
|
||||
"\n",
|
||||
"`AgentFinish` is a response that contains the final message to be sent back to the user. This should be used to end an agent run.\n",
|
||||
" \n",
|
||||
"In this notebook we walk through how to create a custom LLM agent."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "fea4812c",
|
||||
"metadata": {
|
||||
"id": "fea4812c"
|
||||
},
|
||||
"source": [
|
||||
"## Set up environment\n",
|
||||
"\n",
|
||||
"Do necessary imports, etc."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"!pip install langchain\n",
|
||||
"!pip install google-search-results\n",
|
||||
"!pip install openai"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "mvxi3g8DExu6"
|
||||
},
|
||||
"id": "mvxi3g8DExu6",
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "9af9734e",
|
||||
"metadata": {
|
||||
"id": "9af9734e"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser\n",
|
||||
"from langchain.prompts import BaseChatPromptTemplate\n",
|
||||
"from langchain import SerpAPIWrapper, LLMChain\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from typing import List, Union\n",
|
||||
"from langchain.schema import AgentAction, AgentFinish, HumanMessage\n",
|
||||
"import re\n",
|
||||
"from getpass import getpass"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6df0253f",
|
||||
"metadata": {
|
||||
"id": "6df0253f"
|
||||
},
|
||||
"source": [
|
||||
"## Set up tool\n",
|
||||
"\n",
|
||||
"Set up any tools the agent may want to use. This may be necessary to put in the prompt (so that the agent knows to use these tools)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"SERPAPI_API_KEY = getpass()"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "LcSV8a5bFSDE"
|
||||
},
|
||||
"id": "LcSV8a5bFSDE",
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "becda2a1",
|
||||
"metadata": {
|
||||
"id": "becda2a1"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Define which tools the agent can use to answer user queries\n",
|
||||
"search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)\n",
|
||||
"tools = [\n",
|
||||
" Tool(\n",
|
||||
" name = \"Search\",\n",
|
||||
" func=search.run,\n",
|
||||
" description=\"useful for when you need to answer questions about current events\"\n",
|
||||
" )\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2e7a075c",
|
||||
"metadata": {
|
||||
"id": "2e7a075c"
|
||||
},
|
||||
"source": [
|
||||
"## Prompt Template\n",
|
||||
"\n",
|
||||
"This instructs the agent on what to do. Generally, the template should incorporate:\n",
|
||||
" \n",
|
||||
"- `tools`: which tools the agent has access and how and when to call them.\n",
|
||||
"- `intermediate_steps`: These are tuples of previous (`AgentAction`, `Observation`) pairs. These are generally not passed directly to the model, but the prompt template formats them in a specific way.\n",
|
||||
"- `input`: generic user input"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "339b1bb8",
|
||||
"metadata": {
|
||||
"id": "339b1bb8"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set up the base template\n",
|
||||
"template = \"\"\"Complete the objective as best you can. You have access to the following tools:\n",
|
||||
"\n",
|
||||
"{tools}\n",
|
||||
"\n",
|
||||
"Use the following format:\n",
|
||||
"\n",
|
||||
"Question: the input question you must answer\n",
|
||||
"Thought: you should always think about what to do\n",
|
||||
"Action: the action to take, should be one of [{tool_names}]\n",
|
||||
"Action Input: the input to the action\n",
|
||||
"Observation: the result of the action\n",
|
||||
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
|
||||
"Thought: I now know the final answer\n",
|
||||
"Final Answer: the final answer to the original input question\n",
|
||||
"\n",
|
||||
"These were previous tasks you completed:\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Begin!\n",
|
||||
"\n",
|
||||
"Question: {input}\n",
|
||||
"{agent_scratchpad}\"\"\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "fd969d31",
|
||||
"metadata": {
|
||||
"id": "fd969d31"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set up a prompt template\n",
|
||||
"class CustomPromptTemplate(BaseChatPromptTemplate):\n",
|
||||
" # The template to use\n",
|
||||
" template: str\n",
|
||||
" # The list of tools available\n",
|
||||
" tools: List[Tool]\n",
|
||||
" \n",
|
||||
" def format_messages(self, **kwargs) -> str:\n",
|
||||
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
|
||||
" # Format them in a particular way\n",
|
||||
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
|
||||
" thoughts = \"\"\n",
|
||||
" for action, observation in intermediate_steps:\n",
|
||||
" thoughts += action.log\n",
|
||||
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
|
||||
" # Set the agent_scratchpad variable to that value\n",
|
||||
" kwargs[\"agent_scratchpad\"] = thoughts\n",
|
||||
" # Create a tools variable from the list of tools provided\n",
|
||||
" kwargs[\"tools\"] = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in self.tools])\n",
|
||||
" # Create a list of tool names for the tools provided\n",
|
||||
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in self.tools])\n",
|
||||
" formatted = self.template.format(**kwargs)\n",
|
||||
" return [HumanMessage(content=formatted)]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "798ef9fb",
|
||||
"metadata": {
|
||||
"id": "798ef9fb"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"prompt = CustomPromptTemplate(\n",
|
||||
" template=template,\n",
|
||||
" tools=tools,\n",
|
||||
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
|
||||
" # This includes the `intermediate_steps` variable because that is needed\n",
|
||||
" input_variables=[\"input\", \"intermediate_steps\"]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ef3a1af3",
|
||||
"metadata": {
|
||||
"id": "ef3a1af3"
|
||||
},
|
||||
"source": [
|
||||
"## Output Parser\n",
|
||||
"\n",
|
||||
"The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.\n",
|
||||
"\n",
|
||||
"This is where you can change the parsing to do retries, handle whitespace, etc"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "7c6fe0d3",
|
||||
"metadata": {
|
||||
"id": "7c6fe0d3"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"class CustomOutputParser(AgentOutputParser):\n",
|
||||
" \n",
|
||||
" def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
|
||||
" # Check if agent should finish\n",
|
||||
" if \"Final Answer:\" in llm_output:\n",
|
||||
" return AgentFinish(\n",
|
||||
" # Return values is generally always a dictionary with a single `output` key\n",
|
||||
" # It is not recommended to try anything else at the moment :)\n",
|
||||
" return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
|
||||
" log=llm_output,\n",
|
||||
" )\n",
|
||||
" # Parse out the action and action input\n",
|
||||
" regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n",
|
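||||
"        # group(1) is the tool name, group(2) is the tool input; re.DOTALL lets the input span lines\n",
|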
||||
" match = re.search(regex, llm_output, re.DOTALL)\n",
|
||||
" if not match:\n",
|
||||
" raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
|
||||
" action = match.group(1).strip()\n",
|
||||
" action_input = match.group(2)\n",
|
||||
" # Return the action and action input\n",
|
||||
" return AgentAction(tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "d278706a",
|
||||
"metadata": {
|
||||
"id": "d278706a"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"output_parser = CustomOutputParser()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "170587b1",
|
||||
"metadata": {
|
||||
"id": "170587b1"
|
||||
},
|
||||
"source": [
|
||||
"## Set up LLM\n",
|
||||
"\n",
|
||||
"Choose the LLM you want to use!"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"OPENAI_API_KEY = getpass()"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "V8UM02AfGyYa"
|
||||
},
|
||||
"id": "V8UM02AfGyYa",
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "f9d4c374",
|
||||
"metadata": {
|
||||
"id": "f9d4c374"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
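||||
"# temperature=0 keeps responses predictable, which helps the output parser\n",
|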
||||
"llm = ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "caeab5e4",
|
||||
"metadata": {
|
||||
"id": "caeab5e4"
|
||||
},
|
||||
"source": [
|
||||
"## Define the stop sequence\n",
|
||||
"\n",
|
||||
"This is important because it tells the LLM when to stop generation.\n",
|
||||
"\n",
|
||||
"This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an `Observation` (otherwise, the LLM may hallucinate an observation for you)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "34be9f65",
|
||||
"metadata": {
|
||||
"id": "34be9f65"
|
||||
},
|
||||
"source": [
|
||||
"## Set up the Agent\n",
|
||||
"\n",
|
||||
"We can now combine everything to set up our agent"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "9b1cc2a2",
|
||||
"metadata": {
|
||||
"id": "9b1cc2a2"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# LLM chain consisting of the LLM and a prompt\n",
|
||||
"llm_chain = LLMChain(llm=llm, prompt=prompt)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "e4f5092f",
|
||||
"metadata": {
|
||||
"id": "e4f5092f"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tool_names = [tool.name for tool in tools]\n",
|
||||
"agent = LLMSingleActionAgent(\n",
|
||||
" llm_chain=llm_chain, \n",
|
||||
" output_parser=output_parser,\n",
|
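||||
"    # stop right before the model would hallucinate an Observation for itself\n",
|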
||||
" stop=[\"\\nObservation:\"], \n",
|
||||
" allowed_tools=tool_names\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "aa8a5326",
|
||||
"metadata": {
|
||||
"id": "aa8a5326"
|
||||
},
|
||||
"source": [
|
||||
"## Use the Agent\n",
|
||||
"\n",
|
||||
"Now we can use it!"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "490604e9",
|
||||
"metadata": {
|
||||
"id": "490604e9"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
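||||
"# verbose=True prints each thought/action/observation as the agent runs\n",
|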
||||
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "653b1617",
|
||||
"metadata": {
|
||||
"id": "653b1617",
|
||||
"outputId": "82f7dc8f-c09f-46f3-ae45-9acf7e4e3d94",
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/",
|
||||
"height": 264
|
||||
}
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mThought: I should use a reliable search engine to get accurate information.\n",
|
||||
"Action: Search\n",
|
||||
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
|
||||
"\n",
|
||||
"Observation:\u001b[36;1m\u001b[1;3mHe went on to date Gisele Bündchen, Bar Refaeli, Blake Lively, Toni Garrn and Nina Agdal, among others, before finally settling down with current girlfriend Camila Morrone, who is 23 years his junior.\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mI have found the answer to the question.\n",
|
||||
"Final Answer: Leo DiCaprio's current girlfriend is Camila Morrone.\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "execute_result",
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\"Leo DiCaprio's current girlfriend is Camila Morrone.\""
|
||||
],
|
||||
"application/vnd.google.colaboratory.intrinsic+json": {
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"metadata": {},
|
||||
"execution_count": 15
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_executor.run(\"Search for Leo DiCaprio's girlfriend on the internet.\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
|
||||
}
|
||||
},
|
||||
"colab": {
|
||||
"provenance": []
|
||||
}
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ba5f8741",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Custom LLM Agent (with a ChatModel)\n",
|
||||
"\n",
|
||||
"This notebook goes through how to create your own custom agent based on a chat model.\n",
|
||||
"\n",
|
||||
"An LLM chat agent consists of three parts:\n",
|
||||
"\n",
|
||||
"- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do\n",
|
||||
"- ChatModel: This is the language model that powers the agent\n",
|
||||
"- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found\n",
|
||||
"- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"The LLMAgent is used in an AgentExecutor. This AgentExecutor can largely be thought of as a loop that:\n",
|
||||
"1. Passes user input and any previous steps to the Agent (in this case, the LLMAgent)\n",
|
||||
"2. If the Agent returns an `AgentFinish`, then return that directly to the user\n",
|
||||
"3. If the Agent returns an `AgentAction`, then use that to call a tool and get an `Observation`\n",
|
||||
"4. Repeat, passing the `AgentAction` and `Observation` back to the Agent until an `AgentFinish` is emitted.\n",
|
||||
" \n",
|
||||
"`AgentAction` is a response that consists of `action` and `action_input`. `action` refers to which tool to use, and `action_input` refers to the input to that tool. `log` can also be provided as more context (that can be used for logging, tracing, etc).\n",
|
||||
"\n",
|
||||
"`AgentFinish` is a response that contains the final message to be sent back to the user. This should be used to end an agent run.\n",
|
||||
" \n",
|
||||
"In this notebook we walk through how to create a custom LLM agent."
|
||||
]
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "fea4812c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Set up environment\n",
|
||||
"\n",
|
||||
"Do necessary imports, etc."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "9af9734e",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.agents import Tool, AgentExecutor, LLMSingleActionAgent, AgentOutputParser\n",
|
||||
"from langchain.prompts import BaseChatPromptTemplate\n",
|
||||
"from langchain import SerpAPIWrapper, LLMChain\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from typing import List, Union\n",
|
||||
"from langchain.schema import AgentAction, AgentFinish, HumanMessage\n",
|
||||
"import re"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "6df0253f",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Set up tool\n",
|
||||
"\n",
|
||||
"Set up any tools the agent may want to use. This may be necessary to put in the prompt (so that the agent knows to use these tools)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "becda2a1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Define which tools the agent can use to answer user queries\n",
|
||||
"search = SerpAPIWrapper()\n",
|
||||
"tools = [\n",
|
||||
" Tool(\n",
|
||||
" name = \"Search\",\n",
|
||||
" func=search.run,\n",
|
||||
" description=\"useful for when you need to answer questions about current events\"\n",
|
||||
" )\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "2e7a075c",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Prompt Template\n",
|
||||
"\n",
|
||||
"This instructs the agent on what to do. Generally, the template should incorporate:\n",
|
||||
" \n",
|
||||
"- `tools`: which tools the agent has access and how and when to call them.\n",
|
||||
"- `intermediate_steps`: These are tuples of previous (`AgentAction`, `Observation`) pairs. These are generally not passed directly to the model, but the prompt template formats them in a specific way.\n",
|
||||
"- `input`: generic user input"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "339b1bb8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set up the base template\n",
|
||||
"template = \"\"\"Complete the objective as best you can. You have access to the following tools:\n",
|
||||
"\n",
|
||||
"{tools}\n",
|
||||
"\n",
|
||||
"Use the following format:\n",
|
||||
"\n",
|
||||
"Question: the input question you must answer\n",
|
||||
"Thought: you should always think about what to do\n",
|
||||
"Action: the action to take, should be one of [{tool_names}]\n",
|
||||
"Action Input: the input to the action\n",
|
||||
"Observation: the result of the action\n",
|
||||
"... (this Thought/Action/Action Input/Observation can repeat N times)\n",
|
||||
"Thought: I now know the final answer\n",
|
||||
"Final Answer: the final answer to the original input question\n",
|
||||
"\n",
|
||||
"These were previous tasks you completed:\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Begin!\n",
|
||||
"\n",
|
||||
"Question: {input}\n",
|
||||
"{agent_scratchpad}\"\"\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 13,
|
||||
"id": "fd969d31",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# Set up a prompt template\n",
|
||||
"class CustomPromptTemplate(BaseChatPromptTemplate):\n",
|
||||
" # The template to use\n",
|
||||
" template: str\n",
|
||||
" # The list of tools available\n",
|
||||
" tools: List[Tool]\n",
|
||||
" \n",
|
||||
" def format_messages(self, **kwargs) -> str:\n",
|
||||
" # Get the intermediate steps (AgentAction, Observation tuples)\n",
|
||||
" # Format them in a particular way\n",
|
||||
" intermediate_steps = kwargs.pop(\"intermediate_steps\")\n",
|
||||
" thoughts = \"\"\n",
|
||||
" for action, observation in intermediate_steps:\n",
|
||||
" thoughts += action.log\n",
|
||||
" thoughts += f\"\\nObservation: {observation}\\nThought: \"\n",
|
||||
" # Set the agent_scratchpad variable to that value\n",
|
||||
" kwargs[\"agent_scratchpad\"] = thoughts\n",
|
||||
" # Create a tools variable from the list of tools provided\n",
|
||||
" kwargs[\"tools\"] = \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in self.tools])\n",
|
||||
" # Create a list of tool names for the tools provided\n",
|
||||
" kwargs[\"tool_names\"] = \", \".join([tool.name for tool in self.tools])\n",
|
||||
" formatted = self.template.format(**kwargs)\n",
|
||||
" return [HumanMessage(content=formatted)]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 14,
|
||||
"id": "798ef9fb",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"prompt = CustomPromptTemplate(\n",
|
||||
" template=template,\n",
|
||||
" tools=tools,\n",
|
||||
" # This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically\n",
|
||||
" # This includes the `intermediate_steps` variable because that is needed\n",
|
||||
" input_variables=[\"input\", \"intermediate_steps\"]\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "ef3a1af3",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Output Parser\n",
|
||||
"\n",
|
||||
"The output parser is responsible for parsing the LLM output into `AgentAction` and `AgentFinish`. This usually depends heavily on the prompt used.\n",
|
||||
"\n",
|
||||
"This is where you can change the parsing to do retries, handle whitespace, etc"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 15,
|
||||
"id": "7c6fe0d3",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"class CustomOutputParser(AgentOutputParser):\n",
|
||||
" \n",
|
||||
" def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:\n",
|
||||
" # Check if agent should finish\n",
|
||||
" if \"Final Answer:\" in llm_output:\n",
|
||||
" return AgentFinish(\n",
|
||||
" # Return values is generally always a dictionary with a single `output` key\n",
|
||||
" # It is not recommended to try anything else at the moment :)\n",
|
||||
" return_values={\"output\": llm_output.split(\"Final Answer:\")[-1].strip()},\n",
|
||||
" log=llm_output,\n",
|
||||
" )\n",
|
||||
" # Parse out the action and action input\n",
|
||||
" regex = r\"Action\\s*\\d*\\s*:(.*?)\\nAction\\s*\\d*\\s*Input\\s*\\d*\\s*:[\\s]*(.*)\"\n",
|
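||||
"        # group(1) is the tool name, group(2) is the tool input; re.DOTALL lets the input span lines\n",
|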
||||
" match = re.search(regex, llm_output, re.DOTALL)\n",
|
||||
" if not match:\n",
|
||||
" raise ValueError(f\"Could not parse LLM output: `{llm_output}`\")\n",
|
||||
" action = match.group(1).strip()\n",
|
||||
" action_input = match.group(2)\n",
|
||||
" # Return the action and action input\n",
|
||||
" return AgentAction(tool=action, tool_input=action_input.strip(\" \").strip('\"'), log=llm_output)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 16,
|
||||
"id": "d278706a",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"output_parser = CustomOutputParser()"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "170587b1",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Set up LLM\n",
|
||||
"\n",
|
||||
"Choose the LLM you want to use!"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 17,
|
||||
"id": "f9d4c374",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
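||||
"# temperature=0 keeps responses predictable, which helps the output parser\n",
|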
||||
"llm = ChatOpenAI(temperature=0)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "caeab5e4",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Define the stop sequence\n",
|
||||
"\n",
|
||||
"This is important because it tells the LLM when to stop generation.\n",
|
||||
"\n",
|
||||
"This depends heavily on the prompt and model you are using. Generally, you want this to be whatever token you use in the prompt to denote the start of an `Observation` (otherwise, the LLM may hallucinate an observation for you)."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "34be9f65",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Set up the Agent\n",
|
||||
"\n",
|
||||
"We can now combine everything to set up our agent"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 18,
|
||||
"id": "9b1cc2a2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"# LLM chain consisting of the LLM and a prompt\n",
|
||||
"llm_chain = LLMChain(llm=llm, prompt=prompt)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 19,
|
||||
"id": "e4f5092f",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"tool_names = [tool.name for tool in tools]\n",
|
||||
"agent = LLMSingleActionAgent(\n",
|
||||
" llm_chain=llm_chain, \n",
|
||||
" output_parser=output_parser,\n",
|
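||||
"    # stop right before the model would hallucinate an Observation for itself\n",
|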
||||
" stop=[\"\\nObservation:\"], \n",
|
||||
" allowed_tools=tool_names\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "aa8a5326",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Use the Agent\n",
|
||||
"\n",
|
||||
"Now we can use it!"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 20,
|
||||
"id": "490604e9",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
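||||
"# verbose=True prints each thought/action/observation as the agent runs\n",
|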
||||
"agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 21,
|
||||
"id": "653b1617",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mThought: I should use a reliable search engine to get accurate information.\n",
|
||||
"Action: Search\n",
|
||||
"Action Input: \"Leo DiCaprio girlfriend\"\u001b[0m\n",
|
||||
"\n",
|
||||
"Observation:\u001b[36;1m\u001b[1;3mHe went on to date Gisele Bündchen, Bar Refaeli, Blake Lively, Toni Garrn and Nina Agdal, among others, before finally settling down with current girlfriend Camila Morrone, who is 23 years his junior.\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mI have found the answer to the question.\n",
|
||||
"Final Answer: Leo DiCaprio's current girlfriend is Camila Morrone.\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\"Leo DiCaprio's current girlfriend is Camila Morrone.\""
|
||||
]
|
||||
},
|
||||
"execution_count": 21,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_executor.run(\"Search for Leo DiCaprio's girlfriend on the internet.\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "adefb4c2",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
},
|
||||
"vscode": {
|
||||
"interpreter": {
|
||||
"hash": "18784188d7ecd866c0586ac068b02361a6896dc3a29b64f5cc957f09c590acef"
|
||||
}
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
|
||||
@@ -1,383 +1,386 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4658d71a",
|
||||
"metadata": {
|
||||
"id": "4658d71a"
|
||||
},
|
||||
"source": [
|
||||
"# Conversation Agent (for Chat Models)\n",
|
||||
"\n",
|
||||
"This notebook walks through using an agent optimized for conversation, using ChatModels. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.\n",
|
||||
"\n",
|
||||
"This is accomplished with a specific type of agent (`chat-conversational-react-description`) which expects to be used with a memory component."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"!pip install langchain\n",
|
||||
"!pip install google-search-results\n",
|
||||
"!pip install openai"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "efpRpEwvNXU5"
|
||||
},
|
||||
"id": "efpRpEwvNXU5",
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "f65308ab",
|
||||
"metadata": {
|
||||
"id": "f65308ab"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain.agents import Tool\n",
|
||||
"from langchain.memory import ConversationBufferMemory\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.utilities import SerpAPIWrapper\n",
|
||||
"from langchain.agents import initialize_agent\n",
|
||||
"from langchain.agents import AgentType\n",
|
||||
"from getpass import getpass"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"SERPAPI_API_KEY = getpass()"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "qMOoW5QYNlPQ"
|
||||
},
|
||||
"id": "qMOoW5QYNlPQ",
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "5fb14d6d",
|
||||
"metadata": {
|
||||
"id": "5fb14d6d"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)\n",
|
||||
"tools = [\n",
|
||||
" Tool(\n",
|
||||
" name = \"Current Search\",\n",
|
||||
" func=search.run,\n",
|
||||
" description=\"useful for when you need to answer questions about current events or the current state of the world. the input to this should be a single search term.\"\n",
|
||||
" ),\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "dddc34c4",
|
||||
"metadata": {
|
||||
"id": "dddc34c4"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
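||||
"# return_messages=True stores the history as message objects, which chat models expect\n",
|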
||||
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"source": [
|
||||
"OPENAI_API_KEY = getpass()"
|
||||
],
|
||||
"metadata": {
|
||||
"id": "pJWcpWnoN56_"
|
||||
},
|
||||
"id": "pJWcpWnoN56_",
|
||||
"execution_count": null,
|
||||
"outputs": []
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "cafe9bc1",
|
||||
"metadata": {
|
||||
"id": "cafe9bc1"
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm=ChatOpenAI(openai_api_key=OPENAI_API_KEY, temperature=0)\n",
|
||||
"agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "dc70b454",
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/",
|
||||
"height": 192
|
||||
},
|
||||
"id": "dc70b454",
|
||||
"outputId": "9e3d6857-72de-472f-b531-9a7b843f1621"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"Hello Bob! How can I assist you today?\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "execute_result",
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Hello Bob! How can I assist you today?'"
|
||||
],
|
||||
"application/vnd.google.colaboratory.intrinsic+json": {
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"metadata": {},
|
||||
"execution_count": 8
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_chain.run(input=\"hi, i am bob\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "3dcf7953",
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/",
|
||||
"height": 192
|
||||
},
|
||||
"id": "3dcf7953",
|
||||
"outputId": "9afdbf2c-ceed-4835-9975-0841dd2162d6"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"Your name is Bob.\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "execute_result",
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Your name is Bob.'"
|
||||
],
|
||||
"application/vnd.google.colaboratory.intrinsic+json": {
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"metadata": {},
|
||||
"execution_count": 9
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_chain.run(input=\"what's my name?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "aa05f566",
|
||||
"metadata": {
|
||||
"scrolled": false,
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/",
|
||||
"height": 316
|
||||
},
|
||||
"id": "aa05f566",
|
||||
"outputId": "d38fe468-6c94-450a-9f07-0044bf7beb34"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Current Search\",\n",
|
||||
" \"action_input\": \"Thai food dinner recipes\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"Observation: \u001b[36;1m\u001b[1;3m64 easy Thai recipes for any night of the week · Thai curry noodle soup · Thai yellow cauliflower, snake bean and tofu curry · Thai-spiced chicken hand pies · Thai ...\u001b[0m\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"Here are some Thai food dinner recipes you can try this week: Thai curry noodle soup, Thai yellow cauliflower, snake bean and tofu curry, Thai-spiced chicken hand pies, and many more. You can find the full list of recipes at the source I found earlier.\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "execute_result",
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Here are some Thai food dinner recipes you can try this week: Thai curry noodle soup, Thai yellow cauliflower, snake bean and tofu curry, Thai-spiced chicken hand pies, and many more. You can find the full list of recipes at the source I found earlier.'"
|
||||
],
|
||||
"application/vnd.google.colaboratory.intrinsic+json": {
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"metadata": {},
|
||||
"execution_count": 10
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_chain.run(\"what are some good dinners to make this week, if i like thai food?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 11,
|
||||
"id": "c5d8b7ea",
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/",
|
||||
"height": 192
|
||||
},
|
||||
"id": "c5d8b7ea",
|
||||
"outputId": "105db01e-c0f7-4b82-edd9-ea02a02fc66a"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"The last letter in your name is 'b'. Argentina won the World Cup in 1978.\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "execute_result",
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\"The last letter in your name is 'b'. Argentina won the World Cup in 1978.\""
|
||||
],
|
||||
"application/vnd.google.colaboratory.intrinsic+json": {
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"metadata": {},
|
||||
"execution_count": 11
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_chain.run(input=\"tell me the last letter in my name, and also tell me who won the world cup in 1978?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 12,
|
||||
"id": "f608889b",
|
||||
"metadata": {
|
||||
"colab": {
|
||||
"base_uri": "https://localhost:8080/",
|
||||
"height": 278
|
||||
},
|
||||
"id": "f608889b",
|
||||
"outputId": "49ea0e17-d8cd-4de9-e119-e6006caea32f"
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"output_type": "stream",
|
||||
"name": "stdout",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Current Search\",\n",
|
||||
" \"action_input\": \"weather in pomfret\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"Observation: \u001b[36;1m\u001b[1;3mCloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%.\u001b[0m\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%.\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"output_type": "execute_result",
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Cloudy with showers. Low around 55F. Winds S at 5 to 10 mph. Chance of rain 60%. Humidity76%.'"
|
||||
],
|
||||
"application/vnd.google.colaboratory.intrinsic+json": {
|
||||
"type": "string"
|
||||
}
|
||||
},
|
||||
"metadata": {},
|
||||
"execution_count": 12
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_chain.run(input=\"whats the weather like in pomfret?\")"
|
||||
]
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
},
|
||||
"colab": {
|
||||
"provenance": []
|
||||
}
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "4658d71a",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"# Conversation Agent (for Chat Models)\n",
|
||||
"\n",
|
||||
"This notebook walks through using an agent optimized for conversation, using ChatModels. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.\n",
|
||||
"\n",
|
||||
"This is accomplished with a specific type of agent (`chat-conversational-react-description`) which expects to be used with a memory component."
|
||||
]
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"id": "f4f5d1a8",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"import os\n",
|
||||
"os.environ[\"LANGCHAIN_HANDLER\"] = \"langchain\""
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"id": "f65308ab",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"WARNING:root:Failed to default session, using empty session: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /sessions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10a1767c0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
|
||||
]
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"from langchain.agents import Tool\n",
|
||||
"from langchain.memory import ConversationBufferMemory\n",
|
||||
"from langchain.chat_models import ChatOpenAI\n",
|
||||
"from langchain.utilities import SerpAPIWrapper\n",
|
||||
"from langchain.agents import initialize_agent\n",
|
||||
"from langchain.agents import AgentType"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"id": "5fb14d6d",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"search = SerpAPIWrapper()\n",
|
||||
"tools = [\n",
|
||||
" Tool(\n",
|
||||
" name = \"Current Search\",\n",
|
||||
" func=search.run,\n",
|
||||
" description=\"useful for when you need to answer questions about current events or the current state of the world. the input to this should be a single search term.\"\n",
|
||||
" ),\n",
|
||||
"]"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "dddc34c4",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
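||||
"# return_messages=True stores the history as message objects, which chat models expect\n",
|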
||||
"memory = ConversationBufferMemory(memory_key=\"chat_history\", return_messages=True)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 5,
|
||||
"id": "cafe9bc1",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"llm=ChatOpenAI(temperature=0)\n",
|
||||
"agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "dc70b454",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13fab40d0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"Hello Bob! How can I assist you today?\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Hello Bob! How can I assist you today?'"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_chain.run(input=\"hi, i am bob\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 7,
|
||||
"id": "3dcf7953",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13fab44f0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"Your name is Bob.\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Your name is Bob.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 7,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_chain.run(input=\"what's my name?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 8,
|
||||
"id": "aa05f566",
|
||||
"metadata": {
|
||||
"scrolled": false
|
||||
},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Current Search\",\n",
|
||||
" \"action_input\": \"Thai food dinner recipes\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"Observation: \u001b[36;1m\u001b[1;3m59 easy Thai recipes for any night of the week · Marion Grasby's Thai spicy chilli and basil fried rice · Thai curry noodle soup · Marion Grasby's Thai Spicy ...\u001b[0m\n",
|
||||
"Thought:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13fae8be0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"Here are some Thai food dinner recipes you can make this week: Thai spicy chilli and basil fried rice, Thai curry noodle soup, and Thai Spicy ... (59 recipes in total).\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'Here are some Thai food dinner recipes you can make this week: Thai spicy chilli and basil fried rice, Thai curry noodle soup, and Thai Spicy ... (59 recipes in total).'"
|
||||
]
|
||||
},
|
||||
"execution_count": 8,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_chain.run(\"what are some good dinners to make this week, if i like thai food?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 9,
|
||||
"id": "c5d8b7ea",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m```json\n",
|
||||
"{\n",
|
||||
" \"action\": \"Current Search\",\n",
|
||||
" \"action_input\": \"who won the world cup in 1978\"\n",
|
||||
"}\n",
|
||||
"```\u001b[0m\n",
|
||||
"Observation: \u001b[36;1m\u001b[1;3mArgentina national football team\u001b[0m\n",
|
||||
"Thought:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13fae86d0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[32;1m\u001b[1;3m```json\n",
|
||||
"{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"The last letter in your name is 'b', and the winner of the 1978 World Cup was the Argentina national football team.\"\n",
|
||||
"}\n",
|
||||
"```\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\"The last letter in your name is 'b', and the winner of the 1978 World Cup was the Argentina national football team.\""
|
||||
]
|
||||
},
|
||||
"execution_count": 9,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_chain.run(input=\"tell me the last letter in my name, and also tell me who won the world cup in 1978?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 10,
|
||||
"id": "f608889b",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Current Search\",\n",
|
||||
" \"action_input\": \"weather in pomfret\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"Observation: \u001b[36;1m\u001b[1;3m10 Day Weather-Pomfret, CT ; Sun 16. 64° · 50°. 24% · NE 7 mph ; Mon 17. 58° · 45°. 70% · ESE 8 mph ; Tue 18. 57° · 37°. 8% · WSW 15 mph.\u001b[0m\n",
|
||||
"Thought:"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stderr",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"WARNING:root:Failed to persist run: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /chain-runs (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x13fa9d7f0>: Failed to establish a new connection: [Errno 61] Connection refused'))\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\u001b[32;1m\u001b[1;3m{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"The weather in Pomfret, CT for the next 10 days is as follows: Sun 16. 64° · 50°. 24% · NE 7 mph ; Mon 17. 58° · 45°. 70% · ESE 8 mph ; Tue 18. 57° · 37°. 8% · WSW 15 mph.\"\n",
|
||||
"}\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'The weather in Pomfret, CT for the next 10 days is as follows: Sun 16. 64° · 50°. 24% · NE 7 mph ; Mon 17. 58° · 45°. 70% · ESE 8 mph ; Tue 18. 57° · 37°. 8% · WSW 15 mph.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"agent_chain.run(input=\"whats the weather like in pomfret?\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "0084efd6",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
}
|
||||
],
|
||||
"metadata": {
|
||||
"kernelspec": {
|
||||
"display_name": "Python 3 (ipykernel)",
|
||||
"language": "python",
|
||||
"name": "python3"
|
||||
},
|
||||
"language_info": {
|
||||
"codemirror_mode": {
|
||||
"name": "ipython",
|
||||
"version": 3
|
||||
},
|
||||
"file_extension": ".py",
|
||||
"mimetype": "text/x-python",
|
||||
"name": "python",
|
||||
"nbconvert_exporter": "python",
|
||||
"pygments_lexer": "ipython3",
|
||||
"version": "3.9.1"
|
||||
}
|
||||
},
|
||||
"nbformat": 4,
|
||||
"nbformat_minor": 5
|
||||
}
|
||||
|
||||
@@ -42,7 +42,7 @@
|
||||
"search = SerpAPIWrapper()\n",
|
||||
"llm_math_chain = LLMMathChain(llm=llm, verbose=True)\n",
|
||||
"db = SQLDatabase.from_uri(\"sqlite:///../../../../../notebooks/Chinook.db\")\n",
|
||||
"db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)\n",
|
||||
"db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)\n",
|
||||
"tools = [\n",
|
||||
" Tool(\n",
|
||||
" name = \"Search\",\n",
|
||||
|
||||
@@ -44,7 +44,7 @@
|
||||
"search = SerpAPIWrapper()\n",
|
||||
"llm_math_chain = LLMMathChain(llm=llm1, verbose=True)\n",
|
||||
"db = SQLDatabase.from_uri(\"sqlite:///../../../../../notebooks/Chinook.db\")\n",
|
||||
"db_chain = SQLDatabaseChain.from_llm(llm1, db, verbose=True)\n",
|
||||
"db_chain = SQLDatabaseChain(llm=llm1, database=db, verbose=True)\n",
|
||||
"tools = [\n",
|
||||
" Tool(\n",
|
||||
" name = \"Search\",\n",
|
||||
|
||||
@@ -194,18 +194,14 @@
|
||||
"\n",
|
||||
"\u001b[0m\n",
|
||||
"Observation: \u001b[36;1m\u001b[1;3m28 years\u001b[0m\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mPrevious steps: steps=[(Step(value=\"Search for Leo DiCaprio's girlfriend on the internet.\"), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.')), (Step(value='Find her current age.'), StepResponse(response='28 years'))]\n",
|
||||
"\n",
|
||||
"Current objective: None\n",
|
||||
"\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mBased on my search, Gigi Hadid's current age is 26 years old. \n",
|
||||
"Action:\n",
|
||||
"```\n",
|
||||
"{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"Gigi Hadid's current age is 28 years.\"\n",
|
||||
" \"action_input\": \"Gigi Hadid's current age is 26 years old.\"\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n",
|
||||
@@ -213,39 +209,64 @@
|
||||
"\n",
|
||||
"Step: Find her current age.\n",
|
||||
"\n",
|
||||
"Response: Gigi Hadid's current age is 28 years.\n",
|
||||
"Response: Gigi Hadid's current age is 26 years old.\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mAction:\n",
|
||||
"```\n",
|
||||
"{\n",
|
||||
" \"action\": \"Calculator\",\n",
|
||||
" \"action_input\": \"28 ** 0.43\"\n",
|
||||
" \"action_input\": \"26 ** 0.43\"\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
|
||||
"28 ** 0.43\u001b[32;1m\u001b[1;3m\n",
|
||||
"26 ** 0.43\u001b[32;1m\u001b[1;3m\n",
|
||||
"```text\n",
|
||||
"28 ** 0.43\n",
|
||||
"26 ** 0.43\n",
|
||||
"```\n",
|
||||
"...numexpr.evaluate(\"28 ** 0.43\")...\n",
|
||||
"...numexpr.evaluate(\"26 ** 0.43\")...\n",
|
||||
"\u001b[0m\n",
|
||||
"Answer: \u001b[33;1m\u001b[1;3m4.1906168361987195\u001b[0m\n",
|
||||
"Answer: \u001b[33;1m\u001b[1;3m4.059182145592686\u001b[0m\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n",
|
||||
"\n",
|
||||
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 4.1906168361987195\u001b[0m\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mThe next step is to provide the answer to the user's question.\n",
|
||||
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 4.059182145592686\u001b[0m\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mThe current objective is to raise Gigi Hadid's age to the 0.43 power. \n",
|
||||
"\n",
|
||||
"Action:\n",
|
||||
"```\n",
|
||||
"{\n",
|
||||
" \"action\": \"Calculator\",\n",
|
||||
" \"action_input\": \"26 ** 0.43\"\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new LLMMathChain chain...\u001b[0m\n",
|
||||
"26 ** 0.43\u001b[32;1m\u001b[1;3m\n",
|
||||
"```text\n",
|
||||
"26 ** 0.43\n",
|
||||
"```\n",
|
||||
"...numexpr.evaluate(\"26 ** 0.43\")...\n",
|
||||
"\u001b[0m\n",
|
||||
"Answer: \u001b[33;1m\u001b[1;3m4.059182145592686\u001b[0m\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n",
|
||||
"\n",
|
||||
"Observation: \u001b[33;1m\u001b[1;3mAnswer: 4.059182145592686\u001b[0m\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3mThe answer to the current objective is 4.059182145592686.\n",
|
||||
"\n",
|
||||
"Action:\n",
|
||||
"```\n",
|
||||
"{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\"\n",
|
||||
" \"action_input\": \"Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\"\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n",
|
||||
@@ -253,14 +274,14 @@
|
||||
"\n",
|
||||
"Step: Raise her current age to the 0.43 power using a calculator or programming language.\n",
|
||||
"\n",
|
||||
"Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\n",
|
||||
"Response: Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mAction:\n",
|
||||
"```\n",
|
||||
"{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"The result is approximately 4.19.\"\n",
|
||||
" \"action_input\": \"Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\"\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\u001b[0m\n",
|
||||
@@ -270,14 +291,14 @@
|
||||
"\n",
|
||||
"Step: Output the result.\n",
|
||||
"\n",
|
||||
"Response: The result is approximately 4.19.\n",
|
||||
"Response: Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3mAction:\n",
|
||||
"```\n",
|
||||
"{\n",
|
||||
" \"action\": \"Final Answer\",\n",
|
||||
" \"action_input\": \"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\"\n",
|
||||
" \"action_input\": \"Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\"\n",
|
||||
"}\n",
|
||||
"```\n",
|
||||
"\u001b[0m\n",
|
||||
@@ -289,14 +310,14 @@
|
||||
"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\n",
|
||||
"Response: Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"\"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.\""
|
||||
"\"Gigi Hadid's age raised to the 0.43 power is approximately 4.059 years.\""
|
||||
]
|
||||
},
|
||||
"execution_count": 10,
|
||||
|
||||
@@ -1,149 +0,0 @@
|
||||
{
|
||||
"cells": [
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"\n",
|
||||
"# GraphQL tool\n",
|
||||
"This Jupyter Notebook demonstrates how to use the BaseGraphQLTool component with an Agent.\n",
|
||||
"\n",
|
||||
"GraphQL is a query language for APIs and a runtime for executing those queries against your data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.\n",
|
||||
"\n",
|
||||
"By including a BaseGraphQLTool in the list of tools provided to an Agent, you can grant your Agent the ability to query data from GraphQL APIs for any purposes you need.\n",
|
||||
"\n",
|
||||
"In this example, we'll be using the public Star Wars GraphQL API available at the following endpoint: https://swapi-graphql.netlify.app/.netlify/functions/index.\n",
|
||||
"\n",
|
||||
"First, you need to install httpx and gql Python packages."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"metadata": {
|
||||
"vscode": {
|
||||
"languageId": "shellscript"
|
||||
}
|
||||
},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"pip install httpx gql > /dev/null"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now, let's create a BaseGraphQLTool instance with the specified Star Wars API endpoint and initialize an Agent with the tool."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"from langchain import OpenAI\n",
|
||||
"from langchain.agents import load_tools, initialize_agent, AgentType\n",
|
||||
"from langchain.utilities import GraphQLAPIWrapper\n",
|
||||
"\n",
|
||||
"llm = OpenAI(temperature=0)\n",
|
||||
"\n",
|
||||
"tools = load_tools([\"graphql\"], graphql_endpoint=\"https://swapi-graphql.netlify.app/.netlify/functions/index\", llm=llm)\n",
|
||||
"\n",
|
||||
"agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"Now, we can use the Agent to run queries against the Star Wars GraphQL API. Let's ask the Agent to list all the Star Wars films and their release dates."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"\n",
|
||||
"\n",
|
||||
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
|
||||
"\u001b[32;1m\u001b[1;3m I need to query the graphql database to get the titles of all the star wars films\n",
|
||||
"Action: query_graphql\n",
|
||||
"Action Input: query { allFilms { films { title } } }\u001b[0m\n",
|
||||
"Observation: \u001b[36;1m\u001b[1;3m\"{\\n \\\"allFilms\\\": {\\n \\\"films\\\": [\\n {\\n \\\"title\\\": \\\"A New Hope\\\"\\n },\\n {\\n \\\"title\\\": \\\"The Empire Strikes Back\\\"\\n },\\n {\\n \\\"title\\\": \\\"Return of the Jedi\\\"\\n },\\n {\\n \\\"title\\\": \\\"The Phantom Menace\\\"\\n },\\n {\\n \\\"title\\\": \\\"Attack of the Clones\\\"\\n },\\n {\\n \\\"title\\\": \\\"Revenge of the Sith\\\"\\n }\\n ]\\n }\\n}\"\u001b[0m\n",
|
||||
"Thought:\u001b[32;1m\u001b[1;3m I now know the titles of all the star wars films\n",
|
||||
"Final Answer: The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith.\u001b[0m\n",
|
||||
"\n",
|
||||
"\u001b[1m> Finished chain.\u001b[0m\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"'The titles of all the star wars films are: A New Hope, The Empire Strikes Back, Return of the Jedi, The Phantom Menace, Attack of the Clones, and Revenge of the Sith.'"
|
||||
]
|
||||
},
|
||||
"execution_count": 4,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"graphql_fields = \"\"\"allFilms {\n",
|
||||
" films {\n",
|
||||
" title\n",
|
||||
" director\n",
|
||||
" releaseDate\n",
|
||||
" speciesConnection {\n",
|
||||
" species {\n",
|
||||
" name\n",
|
||||
" classification\n",
|
||||
" homeworld {\n",
|
||||
" name\n",
|
||||
" }\n",
|
||||
" }\n",
|
||||
" }\n",
|
||||
" }\n",
|
||||
" }\n",
|
||||
"\n",
|
||||
"\"\"\"\n",
|
||||
"\n",
|
||||
"suffix = \"Search for the titles of all the stawars films stored in the graphql database that has this schema \"\n",
|
||||
"\n",
|
||||
"\n",
|
||||
"agent.run(suffix + graphql_fields)"
|
||||
]
|
||||
}
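{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also query the endpoint directly, without going through the agent. This is a minimal sketch, assuming (as the `query_graphql` tool does under the hood) that `GraphQLAPIWrapper` is constructed with the `graphql_endpoint` and exposes a `run` method that executes a raw query string:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch (assumption): call the wrapper directly, bypassing the agent.\n",
"wrapper = GraphQLAPIWrapper(\n",
"    graphql_endpoint=\"https://swapi-graphql.netlify.app/.netlify/functions/index\"\n",
")\n",
"# Returns the JSON response as a string, like the query_graphql tool's observation.\n",
"print(wrapper.run(\"query { allFilms { films { title releaseDate } } }\"))"
]
}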
],
"metadata": {
"interpreter": {
"hash": "f85209c3c4c190dca7367d6a1e623da50a9a4392fd53313a7cf9d4bda9c4b85b"
},
"kernelspec": {
"display_name": "Python 3.9.16 ('langchain')",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -19,7 +19,7 @@
"outputs": [],
"source": [
"# Requires transformers>=4.29.0 and huggingface_hub>=0.14.1\n",
"!pip install --upgrade transformers huggingface_hub > /dev/null"
"!pip install --uprade transformers huggingface_hub > /dev/null"
]
},
{

@@ -60,15 +60,6 @@
"docs = [Document(page_content=t) for t in texts[:3]]"
]
},
{
"cell_type": "markdown",
"id": "21284c47",
"metadata": {},
"source": [
"## Quickstart\n",
"If you just want to get started as quickly as possible, this is the recommended way to do it:"
]
},
{
"cell_type": "code",
"execution_count": 4,
@@ -79,6 +70,15 @@
"from langchain.chains.summarize import load_summarize_chain"
]
},
{
"cell_type": "markdown",
"id": "21284c47",
"metadata": {},
"source": [
"## Quickstart\n",
"If you just want to get started as quickly as possible, this is the recommended way to do it:"
]
},
{
"cell_type": "code",
"execution_count": 7,

@@ -108,7 +108,6 @@ We need access tokens and sometime other parameters to get access to these datas
./document_loaders/examples/confluence.ipynb
./document_loaders/examples/diffbot.ipynb
./document_loaders/examples/discord_loader.ipynb
./document_loaders/examples/docugami.ipynb
./document_loaders/examples/duckdb.ipynb
./document_loaders/examples/figma.ipynb
./document_loaders/examples/gitbook.ipynb

@@ -5,7 +5,7 @@
"id": "66a7777e",
"metadata": {},
"source": [
"# BiliBili\n",
"# Bilibili\n",
"\n",
">[Bilibili](https://www.bilibili.tv/) is one of the most beloved long-form video sites in China.\n",
"\n",
@@ -35,7 +35,7 @@
},
"outputs": [],
"source": [
"from langchain.document_loaders import BiliBiliLoader"
"from langchain.document_loaders.bilibili import BiliBiliLoader"
]
},
{

@@ -1,406 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Docugami\n",
"This notebook covers how to load documents from `Docugami`. See [here](../../../../ecosystem/docugami.md) for more details, and the advantages of using this system over alternative data loaders.\n",
"\n",
"## Prerequisites\n",
"1. Follow the Quick Start section in [this document](../../../../ecosystem/docugami.md)\n",
"2. Grab an access token for your workspace, and make sure it is set as the DOCUGAMI_API_KEY environment variable\n",
"3. Grab some docset and document IDs for your processed documents, as described here: https://help.docugami.com/home/docugami-api"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# You need the lxml package to use the DocugamiLoader\n",
"!poetry run pip -q install lxml"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from langchain.document_loaders import DocugamiLoader"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load Documents\n",
"\n",
"If the DOCUGAMI_API_KEY environment variable is set, there is no need to pass it to the loader explicitly; otherwise, you can pass it in as the `access_token` parameter."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='MUTUAL NON-DISCLOSURE AGREEMENT This Mutual Non-Disclosure Agreement (this “ Agreement ”) is entered into and made effective as of April 4 , 2018 between Docugami Inc. , a Delaware corporation , whose address is 150 Lake Street South , Suite 221 , Kirkland , Washington 98033 , and Caleb Divine , an individual, whose address is 1201 Rt 300 , Newburgh NY 12550 .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:ThisMutualNon-disclosureAgreement', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'ThisMutualNon-disclosureAgreement'}),\n",
" Document(page_content='The above named parties desire to engage in discussions regarding a potential agreement or other transaction between the parties (the “Purpose”). In connection with such discussions, it may be necessary for the parties to disclose to each other certain confidential information or materials to enable them to evaluate whether to enter into such agreement or transaction.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Discussions', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Discussions'}),\n",
" Document(page_content='In consideration of the foregoing, the parties agree as follows:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Consideration', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'Consideration'}),\n",
" Document(page_content='1. Confidential Information . For purposes of this Agreement , “ Confidential Information ” means any information or materials disclosed by one party to the other party that: (i) if disclosed in writing or in the form of tangible materials, is marked “confidential” or “proprietary” at the time of such disclosure; (ii) if disclosed orally or by visual presentation, is identified as “confidential” or “proprietary” at the time of such disclosure, and is summarized in a writing sent by the disclosing party to the receiving party within thirty ( 30 ) days after any such disclosure; or (iii) due to its nature or the circumstances of its disclosure, a person exercising reasonable business judgment would understand to be confidential or proprietary.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Purposes/docset:ConfidentialInformation-section/docset:ConfidentialInformation[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ConfidentialInformation'}),\n",
" Document(page_content=\"2. Obligations and Restrictions . Each party agrees: (i) to maintain the other party's Confidential Information in strict confidence; (ii) not to disclose such Confidential Information to any third party; and (iii) not to use such Confidential Information for any purpose except for the Purpose. Each party may disclose the other party’s Confidential Information to its employees and consultants who have a bona fide need to know such Confidential Information for the Purpose, but solely to the extent necessary to pursue the Purpose and for no other purpose; provided, that each such employee and consultant first executes a written agreement (or is otherwise already bound by a written agreement) that contains use and nondisclosure restrictions at least as protective of the other party’s Confidential Information as those set forth in this Agreement .\", metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Obligations/docset:ObligationsAndRestrictions-section/docset:ObligationsAndRestrictions', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ObligationsAndRestrictions'}),\n",
" Document(page_content='3. Exceptions. The obligations and restrictions in Section 2 will not apply to any information or materials that:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Exceptions/docset:Exceptions-section/docset:Exceptions[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Exceptions'}),\n",
" Document(page_content='(i) were, at the date of disclosure, or have subsequently become, generally known or available to the public through no act or failure to act by the receiving party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheDate/docset:TheDate', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheDate'}),\n",
" Document(page_content='(ii) were rightfully known by the receiving party prior to receiving such information or materials from the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:SuchInformation/docset:TheReceivingParty', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}),\n",
" Document(page_content='(iii) are rightfully acquired by the receiving party from a third party who has the right to disclose such information or materials without breach of any confidentiality obligation to the disclosing party;', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheDate/docset:TheReceivingParty/docset:TheReceivingParty', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheReceivingParty'}),\n",
" Document(page_content='4. Compelled Disclosure . Nothing in this Agreement will be deemed to restrict a party from disclosing the other party’s Confidential Information to the extent required by any order, subpoena, law, statute or regulation; provided, that the party required to make such a disclosure uses reasonable efforts to give the other party reasonable advance notice of such required disclosure in order to enable the other party to prevent or limit such disclosure.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Disclosure/docset:CompelledDisclosure-section/docset:CompelledDisclosure', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'CompelledDisclosure'}),\n",
" Document(page_content='5. Return of Confidential Information . Upon the completion or abandonment of the Purpose, and in any event upon the disclosing party’s request, the receiving party will promptly return to the disclosing party all tangible items and embodiments containing or consisting of the disclosing party’s Confidential Information and all copies thereof (including electronic copies), and any notes, analyses, compilations, studies, interpretations, memoranda or other documents (regardless of the form thereof) prepared by or on behalf of the receiving party that contain or are based upon the disclosing party’s Confidential Information .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheCompletion/docset:ReturnofConfidentialInformation-section/docset:ReturnofConfidentialInformation', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'ReturnofConfidentialInformation'}),\n",
" Document(page_content='6. No Obligations . Each party retains the right to determine whether to disclose any Confidential Information to the other party.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoObligations/docset:NoObligations-section/docset:NoObligations[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoObligations'}),\n",
" Document(page_content='7. No Warranty. ALL CONFIDENTIAL INFORMATION IS PROVIDED BY THE DISCLOSING PARTY “AS IS ”.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:NoWarranty/docset:NoWarranty-section/docset:NoWarranty[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'NoWarranty'}),\n",
" Document(page_content='8. Term. This Agreement will remain in effect for a period of seven ( 7 ) years from the date of last disclosure of Confidential Information by either party, at which time it will terminate.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:ThisAgreement/docset:Term-section/docset:Term', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Term'}),\n",
" Document(page_content='9. Equitable Relief . Each party acknowledges that the unauthorized use or disclosure of the disclosing party’s Confidential Information may cause the disclosing party to incur irreparable harm and significant damages, the degree of which may be difficult to ascertain. Accordingly, each party agrees that the disclosing party will have the right to seek immediate equitable relief to enjoin any unauthorized use or disclosure of its Confidential Information , in addition to any other rights and remedies that it may have at law or otherwise.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:EquitableRelief/docset:EquitableRelief-section/docset:EquitableRelief[2]', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'EquitableRelief'}),\n",
" Document(page_content='10. Non-compete. To the maximum extent permitted by applicable law, during the Term of this Agreement and for a period of one ( 1 ) year thereafter, Caleb Divine may not market software products or do business that directly or indirectly competes with Docugami software products .', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:TheMaximumExtent/docset:Non-compete-section/docset:Non-compete', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Non-compete'}),\n",
" Document(page_content='11. Miscellaneous. This Agreement will be governed and construed in accordance with the laws of the State of Washington , excluding its body of law controlling conflict of laws. This Agreement is the complete and exclusive understanding and agreement between the parties regarding the subject matter of this Agreement and supersedes all prior agreements, understandings and communications, oral or written, between the parties regarding the subject matter of this Agreement . If any provision of this Agreement is held invalid or unenforceable by a court of competent jurisdiction, that provision of this Agreement will be enforced to the maximum extent permissible and the other provisions of this Agreement will remain in full force and effect. Neither party may assign this Agreement , in whole or in part, by operation of law or otherwise, without the other party’s prior written consent, and any attempted assignment without such consent will be void. This Agreement may be executed in counterparts, each of which will be deemed an original, but all of which together will constitute one and the same instrument.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:MutualNon-disclosure/docset:MUTUALNON-DISCLOSUREAGREEMENT-section/docset:MUTUALNON-DISCLOSUREAGREEMENT/docset:Consideration/docset:Purposes/docset:Accordance/docset:Miscellaneous-section/docset:Miscellaneous', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'div', 'tag': 'Miscellaneous'}),\n",
" Document(page_content='[SIGNATURE PAGE FOLLOWS] IN WITNESS WHEREOF, the parties hereto have executed this Mutual Non-Disclosure Agreement by their duly authorized officers or representatives as of the date first set forth above.', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:TheParties', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': 'p', 'tag': 'TheParties'}),\n",
" Document(page_content='DOCUGAMI INC . : \\n\\n Caleb Divine : \\n\\n Signature: Signature: Name: \\n\\n Jean Paoli Name: Title: \\n\\n CEO Title:', metadata={'xpath': '/docset:MutualNon-disclosure/docset:Witness/docset:TheParties/docset:DocugamiInc/docset:DocugamiInc/xhtml:table', 'id': '43rj0ds7s0ur', 'name': 'NDA simple layout.docx', 'structure': '', 'tag': 'table'})]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"DOCUGAMI_API_KEY=os.environ.get('DOCUGAMI_API_KEY')\n",
"\n",
"# To load all docs in the given docset ID, just don't provide document_ids\n",
"loader = DocugamiLoader(docset_id=\"ecxqpipcoe2p\", document_ids=[\"43rj0ds7s0ur\"])\n",
"docs = loader.load()\n",
"docs"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `metadata` for each `Document` (really, a chunk of an actual PDF, DOC or DOCX) contains some useful additional information:\n",
"\n",
"1. **id and name:** ID and Name of the file (PDF, DOC or DOCX) the chunk is sourced from within Docugami.\n",
"2. **xpath:** XPath inside the XML representation of the document, for the chunk. Useful for source citations directly to the actual chunk inside the document XML.\n",
"3. **structure:** Structural attributes of the chunk, e.g. h1, h2, div, table, td, etc. Useful to filter out certain kinds of chunks if needed by the caller.\n",
"4. **tag:** Semantic tag for the chunk, using various generative and extractive techniques. More details here: https://github.com/docugami/DFM-benchmarks"
]
},
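{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small illustration of how this metadata can be used (a sketch, assuming the `docs` list loaded above), you can filter chunks by their structural attributes before passing them downstream, and use the xpath for citations back into the document XML:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Keep only paragraph and div chunks, dropping tables and other structures.\n",
"text_chunks = [d for d in docs if d.metadata.get(\"structure\") in (\"p\", \"div\")]\n",
"\n",
"# The xpath metadata points at the exact chunk inside the document XML.\n",
"for d in text_chunks[:3]:\n",
"    print(d.metadata[\"name\"], \"->\", d.metadata[\"xpath\"])"
]
},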
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Basic Use: Docugami Loader for Document QA\n",
"\n",
"You can use the Docugami Loader like a standard loader for Document QA over multiple docs, albeit with much better chunks that follow the natural contours of the document. There are many great tutorials on how to do this, e.g. [this one](https://www.youtube.com/watch?v=3yPBVii7Ct0). We can just use the same code, but use the `DocugamiLoader` for better chunking, instead of loading text or PDF files directly with basic splitting techniques."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!poetry run pip -q install openai tiktoken chromadb "
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"from langchain.vectorstores import Chroma\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"from langchain.llms import OpenAI\n",
"from langchain.chains import RetrievalQA\n",
"\n",
"# For this example, we already have a processed docset for a set of lease documents\n",
"loader = DocugamiLoader(docset_id=\"wh2kned25uqm\")\n",
"documents = loader.load()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The documents returned by the loader are already split, so we don't need to use a text splitter. Optionally, we can use the metadata on each document, for example the structure or tag attributes, to do any post-processing we want.\n",
"\n",
"We will just use the output of the `DocugamiLoader` as-is to set up a retrieval QA chain the usual way."
]
},
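{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a small optional post-processing sketch (an illustration, not part of the original flow): inspect the distribution of semantic tags and drop very short chunks before indexing."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from collections import Counter\n",
"\n",
"# Most common semantic tags across the loaded chunks.\n",
"print(Counter(d.metadata.get(\"tag\") for d in documents).most_common(5))\n",
"\n",
"# Drop very short chunks before indexing (threshold is arbitrary).\n",
"documents = [d for d in documents if len(d.page_content) > 50]"
]
},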
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using embedded DuckDB without persistence: data will be transient\n"
]
}
],
"source": [
"embedding = OpenAIEmbeddings()\n",
"vectordb = Chroma.from_documents(documents=documents, embedding=embedding)\n",
"retriever = vectordb.as_retriever()\n",
"qa_chain = RetrievalQA.from_chain_type(\n",
" llm=OpenAI(), chain_type=\"stuff\", retriever=retriever, return_source_documents=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'What can tenants do with signage on their properties?',\n",
" 'result': ' Tenants may place signs (digital or otherwise) or other form of identification on the premises after receiving written permission from the landlord which shall not be unreasonably withheld. The tenant is responsible for any damage caused to the premises and must conform to any applicable laws, ordinances, etc. governing the same. The tenant must also remove and clean any window or glass identification promptly upon vacating the premises.',\n",
" 'source_documents': [Document(page_content='ARTICLE VI SIGNAGE 6.01 Signage . Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant ’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant ’s expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises.', metadata={'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:Article/docset:ARTICLEVISIGNAGE-section/docset:_601Signage-section/docset:_601Signage', 'id': 'v1bvgaozfkak', 'name': 'TruTone Lane 2.docx', 'structure': 'div', 'tag': '_601Signage', 'Landlord': 'BUBBA CENTER PARTNERSHIP', 'Tenant': 'Truetone Lane LLC'}),\n",
" Document(page_content='Signage. Tenant may place or attach to the Premises signs (digital or otherwise) or other such identification as needed after receiving written permission from the Landlord , which permission shall not be unreasonably withheld. Any damage caused to the Premises by the Tenant ’s erecting or removing such signs shall be repaired promptly by the Tenant at the Tenant ’s expense . Any signs or other form of identification allowed must conform to all applicable laws, ordinances, etc. governing the same. Tenant also agrees to have any window or glass identification completely removed and cleaned at its expense promptly upon vacating the Premises. \\n\\n ARTICLE VII UTILITIES 7.01', metadata={'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:ThisOFFICELEASEAGREEMENTThis/docset:ArticleIBasic/docset:ArticleIiiUseAndCareOf/docset:ARTICLEIIIUSEANDCAREOFPREMISES-section/docset:ARTICLEIIIUSEANDCAREOFPREMISES/docset:NoOtherPurposes/docset:TenantsResponsibility/dg:chunk', 'id': 'g2fvhekmltza', 'name': 'TruTone Lane 6.pdf', 'structure': 'lim', 'tag': 'chunk', 'Landlord': 'GLORY ROAD LLC', 'Tenant': 'Truetone Lane LLC'}),\n",
" Document(page_content='Landlord , its agents, servants, employees, licensees, invitees, and contractors during the last year of the term of this Lease at any and all times during regular business hours, after 24 hour notice to tenant, to pass and repass on and through the Premises, or such portion thereof as may be necessary, in order that they or any of them may gain access to the Premises for the purpose of showing the Premises to potential new tenants or real estate brokers. In addition, Landlord shall be entitled to place a \"FOR RENT \" or \"FOR LEASE\" sign (not exceeding 8.5 ” x 11 ”) in the front window of the Premises during the last six months of the term of this Lease .', metadata={'xpath': '/docset:Rider/docset:RIDERTOLEASE-section/docset:RIDERTOLEASE/docset:FixedRent/docset:TermYearPeriod/docset:Lease/docset:_42FLandlordSAccess-section/docset:_42FLandlordSAccess/docset:LandlordsRights/docset:Landlord', 'id': 'omvs4mysdk6b', 'name': 'TruTone Lane 1.docx', 'structure': 'p', 'tag': 'Landlord', 'Landlord': 'BIRCH STREET , LLC', 'Tenant': 'Trutone Lane LLC'}),\n",
" Document(page_content=\"24. SIGNS . No signage shall be placed by Tenant on any portion of the Project . However, Tenant shall be permitted to place a sign bearing its name in a location approved by Landlord near the entrance to the Premises (at Tenant's cost ) and will be furnished a single listing of its name in the Building's directory (at Landlord 's cost ), all in accordance with the criteria adopted from time to time by Landlord for the Project . Any changes or additional listings in the directory shall be furnished (subject to availability of space) for the then Building Standard charge .\", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:TheTerms/docset:Indemnification/docset:INDEMNIFICATION-section/docset:INDEMNIFICATION/docset:Waiver/docset:Waiver/docset:Signs/docset:SIGNS-section/docset:SIGNS', 'id': 'qkn9cyqsiuch', 'name': 'Shorebucks LLC_AZ.pdf', 'structure': 'div', 'tag': 'SIGNS', 'Landlord': 'Menlo Group', 'Tenant': 'Shorebucks LLC'})]}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Try out the retriever with an example query\n",
"qa_chain(\"What can tenants do with signage on their properties?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Using Docugami to Add Metadata to Chunks for High Accuracy Document QA\n",
"\n",
"One issue with large documents is that the correct answer to your question may depend on chunks that are far apart in the document. Typical chunking techniques, even with overlap, will struggle with providing the LLM sufficient context to answer such questions. With upcoming very large context LLMs, it may be possible to stuff a lot of tokens, perhaps even entire documents, inside the context, but this will still hit limits at some point with very long documents, or a lot of documents.\n",
"\n",
"For example, if we ask a more complex question that requires the LLM to draw on chunks from different parts of the document, even OpenAI's powerful LLM is unable to answer correctly."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' 9,753 square feet'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_response = qa_chain(\"What is rentable area for the property owned by DHA Group?\")\n",
"chain_response[\"result\"] # the correct answer should be 13,500"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At first glance the answer may seem reasonable, but if you review the source chunks carefully for this answer, you will see that the chunking of the document did not end up putting the Landlord name and the rentable area in the same context, since they are far apart in the document. The retriever therefore ends up finding unrelated chunks from other documents, not even related to the **DHA Group** landlord. That landlord is mentioned on the first page of the file **Shorebucks LLC_NJ.pdf**, and while some of the source chunks used by the chain are indeed from that doc, which contains the correct answer (**13,500**), the chunk that states the rentable area comes from a different doc, and the answer is therefore incorrect."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='1.1 Landlord . DHA Group , a Delaware limited liability company authorized to transact business in New Jersey .', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:DhaGroup/docset:Landlord-section/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),\n",
" Document(page_content='WITNESSES: LANDLORD: DHA Group , a Delaware limited liability company', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Guaranty-section/docset:Guaranty[2]/docset:SIGNATURESONNEXTPAGE-section/docset:INWITNESSWHEREOF-section/docset:INWITNESSWHEREOF/docset:Behalf/docset:Witnesses/xhtml:table/xhtml:tbody/xhtml:tr[3]/xhtml:td[2]/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'p', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),\n",
" Document(page_content=\"1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .\", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),\n",
" Document(page_content='1.6 Rentable Area of the Premises. 9,753 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:PerryBlair/docset:PerryBlair/docset:Premises[2]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises', 'id': 'dsyfhh4vpeyf', 'name': 'Shorebucks LLC_CO.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'Landlord': 'Perry & Blair LLC', 'Tenant': 'Shorebucks LLC'})]"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_response[\"source_documents\"]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Docugami can help here. If a user has been [using Docugami](https://help.docugami.com/home/reports), chunks are annotated with additional metadata created using different techniques. More technical approaches will be added later.\n",
"\n",
"Specifically, let's look at the additional metadata that Docugami returns on the documents, in the form of some simple key/value pairs on all the text chunks:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'xpath': '/docset:OFFICELEASEAGREEMENT-section/docset:OFFICELEASEAGREEMENT/docset:ThisOfficeLeaseAgreement',\n",
" 'id': 'v1bvgaozfkak',\n",
" 'name': 'TruTone Lane 2.docx',\n",
" 'structure': 'p',\n",
" 'tag': 'ThisOfficeLeaseAgreement',\n",
" 'Landlord': 'BUBBA CENTER PARTNERSHIP',\n",
" 'Tenant': 'Truetone Lane LLC'}"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"loader = DocugamiLoader(docset_id=\"wh2kned25uqm\")\n",
"documents = loader.load()\n",
"documents[0].metadata"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use a [self-querying retriever](../../retrievers/examples/self_query_retriever.ipynb) to improve our query accuracy, using this additional metadata:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Using embedded DuckDB without persistence: data will be transient\n"
]
}
],
"source": [
"from langchain.chains.query_constructor.schema import AttributeInfo\n",
"from langchain.retrievers.self_query.base import SelfQueryRetriever\n",
"\n",
"EXCLUDE_KEYS = [\"id\", \"xpath\", \"structure\"]\n",
"metadata_field_info = [\n",
" AttributeInfo(\n",
" name=key,\n",
" description=f\"The {key} for this chunk\",\n",
" type=\"string\",\n",
" )\n",
" for key in documents[0].metadata\n",
" if key.lower() not in EXCLUDE_KEYS\n",
"]\n",
"\n",
"\n",
"document_content_description = \"Contents of this chunk\"\n",
"llm = OpenAI(temperature=0)\n",
"vectordb = Chroma.from_documents(documents=documents, embedding=embedding)\n",
"retriever = SelfQueryRetriever.from_llm(\n",
" llm, vectordb, document_content_description, metadata_field_info, verbose=True\n",
")\n",
"qa_chain = RetrievalQA.from_chain_type(\n",
" llm=OpenAI(), chain_type=\"stuff\", retriever=retriever, return_source_documents=True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's run the same question again. It returns the correct result, since all the chunks have metadata key/value pairs on them carrying key information about the document, even if this information is physically very far away from the source chunk used to generate the answer."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"query='rentable area' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='Landlord', value='DHA Group')\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'What is rentable area for the property owned by DHA Group?',\n",
" 'result': ' 13,500 square feet.',\n",
" 'source_documents': [Document(page_content='1.1 Landlord . DHA Group , a Delaware limited liability company authorized to transact business in New Jersey .', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:DhaGroup/docset:Landlord-section/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),\n",
" Document(page_content='WITNESSES: LANDLORD: DHA Group , a Delaware limited liability company', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Guaranty-section/docset:Guaranty[2]/docset:SIGNATURESONNEXTPAGE-section/docset:INWITNESSWHEREOF-section/docset:INWITNESSWHEREOF/docset:Behalf/docset:Witnesses/xhtml:table/xhtml:tbody/xhtml:tr[3]/xhtml:td[2]/docset:DhaGroup', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'p', 'tag': 'DhaGroup', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),\n",
" Document(page_content=\"1.16 Landlord 's Notice Address . DHA Group , Suite 1010 , 111 Bauer Dr , Oakland , New Jersey , 07436 , with a copy to the Building Management Office at the Project , Attention: On - Site Property Manager .\", metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:GrossRentCreditTheRentCredit-section/docset:GrossRentCreditTheRentCredit/docset:Period/docset:ApplicableSalesTax/docset:PercentageRent/docset:PercentageRent/docset:NoticeAddress[2]/docset:LandlordsNoticeAddress-section/docset:LandlordsNoticeAddress[2]', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'LandlordsNoticeAddress', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'}),\n",
" Document(page_content='1.6 Rentable Area of the Premises. 13,500 square feet . This square footage figure includes an add-on factor for Common Areas in the Building and has been agreed upon by the parties as final and correct and is not subject to challenge or dispute by either party.', metadata={'xpath': '/docset:OFFICELEASE-section/docset:OFFICELEASE/docset:THISOFFICELEASE/docset:WITNESSETH-section/docset:WITNESSETH/docset:TheTerms/dg:chunk/docset:BasicLeaseInformation/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS-section/docset:BASICLEASEINFORMATIONANDDEFINEDTERMS/docset:DhaGroup/docset:DhaGroup/docset:Premises[2]/docset:RentableAreaofthePremises-section/docset:RentableAreaofthePremises', 'id': 'md8rieecquyv', 'name': 'Shorebucks LLC_NJ.pdf', 'structure': 'div', 'tag': 'RentableAreaofthePremises', 'Landlord': 'DHA Group', 'Tenant': 'Shorebucks LLC'})]}"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"qa_chain(\"What is rentable area for the property owned by DHA Group?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This time the answer is correct, since the self-querying retriever created a filter on the landlord attribute of the metadata, correctly filtering to the document that is specifically about the DHA Group landlord. The resulting source chunks are all relevant to this landlord, and this improves answer accuracy even though the landlord is not directly mentioned in the specific chunk that contains the correct answer."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
@@ -1,35 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xhtml="http://www.w3.org/1999/xhtml">

<url>
<loc>https://python.langchain.com/en/stable/</loc>

<lastmod>2023-05-04T16:15:31.377584+00:00</lastmod>

<changefreq>weekly</changefreq>
<priority>1</priority>
</url>

<url>
<loc>https://python.langchain.com/en/latest/</loc>

<lastmod>2023-05-05T07:52:19.633878+00:00</lastmod>

<changefreq>daily</changefreq>
<priority>0.9</priority>
</url>

<url>
<loc>https://python.langchain.com/en/harrison-docs-refactor-3-24/</loc>

<lastmod>2023-03-27T02:32:55.132916+00:00</lastmod>

<changefreq>monthly</changefreq>
<priority>0.8</priority>
</url>

</urlset>
@@ -97,7 +97,7 @@
},
"outputs": [
{
"name": "stdout",
"name": "stdin",
"output_type": "stream",
"text": [
"OpenAI API Key: ········\n"
@@ -673,68 +673,6 @@
"docs = loader.load()"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "45bb0415",
"metadata": {},
"source": [
"## Using pdfplumber\n",
"\n",
"Like PyMuPDF, the output Documents contain detailed metadata about the PDF and its pages, and the loader returns one document per page."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "aefa758d",
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import PDFPlumberLoader"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "049e9d9a",
"metadata": {},
"outputs": [],
"source": [
"loader = PDFPlumberLoader(\"example_data/layout-parser-paper.pdf\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a8610efa",
"metadata": {},
"outputs": [],
"source": [
"data = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8132e551",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='LayoutParser: A Unified Toolkit for Deep\\nLearning Based Document Image Analysis\\nZejiang Shen1 ((cid:0)), Ruochen Zhang2, Melissa Dell3, Benjamin Charles Germain\\nLee4, Jacob Carlson3, and Weining Li5\\n1 Allen Institute for AI\\n1202 shannons@allenai.org\\n2 Brown University\\nruochen zhang@brown.edu\\n3 Harvard University\\nnuJ {melissadell,jacob carlson}@fas.harvard.edu\\n4 University of Washington\\nbcgl@cs.washington.edu\\n12 5 University of Waterloo\\nw422li@uwaterloo.ca\\n]VC.sc[\\nAbstract. Recentadvancesindocumentimageanalysis(DIA)havebeen\\nprimarily driven by the application of neural networks. Ideally, research\\noutcomescouldbeeasilydeployedinproductionandextendedforfurther\\ninvestigation. However, various factors like loosely organized codebases\\nand sophisticated model configurations complicate the easy reuse of im-\\n2v84351.3012:viXra portantinnovationsbyawideaudience.Thoughtherehavebeenon-going\\nefforts to improve reusability and simplify deep learning (DL) model\\ndevelopmentindisciplineslikenaturallanguageprocessingandcomputer\\nvision, none of them are optimized for challenges in the domain of DIA.\\nThis represents a major gap in the existing toolkit, as DIA is central to\\nacademicresearchacross awiderangeof disciplinesinthesocialsciences\\nand humanities. This paper introduces LayoutParser, an open-source\\nlibrary for streamlining the usage of DL in DIA research and applica-\\ntions. The core LayoutParser library comes with a set of simple and\\nintuitiveinterfacesforapplyingandcustomizingDLmodelsforlayoutde-\\ntection,characterrecognition,andmanyotherdocumentprocessingtasks.\\nTo promote extensibility, LayoutParser also incorporates a community\\nplatform for sharing both pre-trained models and full document digiti-\\nzation pipelines. We demonstrate that LayoutParser is helpful for both\\nlightweight and large-scale digitization pipelines in real-word use cases.\\nThe library is publicly available at https://layout-parser.github.io.\\nKeywords: DocumentImageAnalysis·DeepLearning·LayoutAnalysis\\n· Character Recognition · Open Source library · Toolkit.\\n1 Introduction\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndocumentimageanalysis(DIA)tasksincludingdocumentimageclassification[11,', metadata={'source': 'example_data/layout-parser-paper.pdf', 'file_path': 'example_data/layout-parser-paper.pdf', 'page': 1, 'total_pages': 16, 'Author': '', 'CreationDate': 'D:20210622012710Z', 'Creator': 'LaTeX with hyperref', 'Keywords': '', 'ModDate': 'D:20210622012710Z', 'PTEX.Fullbanner': 'This is pdfTeX, Version 3.14159265-2.6-1.40.21 (TeX Live 2020) kpathsea version 6.3.2', 'Producer': 'pdfTeX-1.40.21', 'Subject': '', 'Title': '', 'Trapped': 'False'})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -760,7 +698,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.11.3"
}
},
"nbformat": 4,

@@ -108,9 +108,7 @@
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"scrolled": true
},
"metadata": {},
"outputs": [
{
"data": {
@@ -127,34 +125,6 @@
"documents[0]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Local Sitemap\n",
"\n",
"The sitemap loader can also be used to load local files."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Fetching pages: 100%|####################################################################################################################################| 3/3 [00:00<00:00,  3.91it/s]\n"
]
}
],
"source": [
"sitemap_loader = SitemapLoader(web_path=\"example_data/sitemap.xml\", is_local=True)\n",
"\n",
"docs = sitemap_loader.load()"
]
},
{
"cell_type": "code",
"execution_count": null,
@@ -179,7 +149,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.6"
}
},
"nbformat": 4,

@@ -19,7 +19,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.document_loaders import TelegramChatFileLoader, TelegramChatApiLoader"
"from langchain.document_loaders import TelegramChatLoader"
]
},
{
@@ -29,7 +29,7 @@
"metadata": {},
"outputs": [],
"source": [
"loader = TelegramChatFileLoader(\"example_data/telegram.json\")"
"loader = TelegramChatLoader(\"example_data/telegram.json\")"
]
},
{
@@ -41,7 +41,7 @@
{
"data": {
"text/plain": [
"[Document(page_content=\"Henry on 2020-01-01T00:00:02: It's 2020...\\n\\nHenry on 2020-01-01T00:00:04: Fireworks!\\n\\nGrace 🧤 ðŸ\x8d’ on 2020-01-01T00:00:05: You're a minute late!\\n\\n\", metadata={'source': 'example_data/telegram.json'})]"
"[Document(page_content=\"Henry on 2020-01-01T00:00:02: It's 2020...\\n\\nHenry on 2020-01-01T00:00:04: Fireworks!\\n\\nGrace 🧤 ðŸ\x8d’ on 2020-01-01T00:00:05: You're a minute late!\\n\\n\", lookup_str='', metadata={'source': 'example_data/telegram.json'}, lookup_index=0)]"
]
},
"execution_count": 3,
@@ -54,49 +54,10 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"cell_type": "code",
"execution_count": null,
"id": "3e64cac2",
"metadata": {},
"source": [
"`TelegramChatApiLoader` loads data directly from any specified chat from Telegram. In order to export the data, you will need to authenticate your Telegram account. \n",
"\n",
"You can get the API_HASH and API_ID from https://my.telegram.org/auth?to=apps\n",
"\n",
"chat_entity – recommended to be the [entity](https://docs.telethon.dev/en/stable/concepts/entities.html?highlight=Entity#what-is-an-entity) of a channel.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f05f75f3",
"metadata": {},
"outputs": [],
"source": [
"loader = TelegramChatApiLoader(\n",
" chat_entity=\"<CHAT_URL>\", # recommended to use Entity here\n",
" api_hash=\"<API HASH >\", \n",
" api_id=\"<API_ID>\", \n",
" user_name =\"\", # needed only for caching the session.\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40039f7b",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "18e5af2b",
"metadata": {},
"outputs": [],
"source": []
}
@@ -117,7 +78,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.13"
"version": "3.10.6"
}
},
"nbformat": 4,

@@ -32,7 +32,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 1,
|
||||
"execution_count": 2,
|
||||
"id": "cb4a5787",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -46,7 +46,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 2,
|
||||
"execution_count": 3,
|
||||
"id": "bcbe04d9",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
@@ -83,7 +83,7 @@
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 3,
|
||||
"execution_count": 4,
|
||||
"id": "86e34dbf",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
@@ -138,7 +138,7 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"query='dinosaur' filter=None limit=None\n"
|
||||
"query='dinosaur' filter=None\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -170,7 +170,7 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5) limit=None\n"
|
||||
"query=' ' filter=Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -200,7 +200,7 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None\n"
|
||||
"query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig')\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -229,7 +229,7 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction')]) limit=None\n"
|
||||
"query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='science fiction'), Comparison(comparator=<Comparator.GT: 'gt'>, attribute='rating', value=8.5)])\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -258,7 +258,7 @@
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')]) limit=None\n"
|
||||
"query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='genre', value='animated')])\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
@@ -277,69 +277,10 @@
|
||||
"retriever.get_relevant_documents(\"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "markdown",
|
||||
"id": "87513116",
|
||||
"metadata": {},
|
||||
"source": [
|
||||
"## Filter k\n",
|
||||
"\n",
|
||||
"We can also use the self query retriever to specify `k`: the number of documents to fetch.\n",
|
||||
"\n",
|
||||
"We can do this by passing `enable_limit=True` to the constructor."
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 4,
|
||||
"id": "73cfca56",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": [
|
||||
"retriever = SelfQueryRetriever.from_llm(\n",
|
||||
" llm, \n",
|
||||
" vectorstore, \n",
|
||||
" document_content_description, \n",
|
||||
" metadata_field_info, \n",
|
||||
" enable_limit=True,\n",
|
||||
" verbose=True\n",
|
||||
")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": 6,
|
||||
"id": "60110338",
|
||||
"metadata": {},
|
||||
"outputs": [
|
||||
{
|
||||
"name": "stdout",
|
||||
"output_type": "stream",
|
||||
"text": [
|
||||
"query='dinosaur' filter=None limit=2\n"
|
||||
]
|
||||
},
|
||||
{
|
||||
"data": {
|
||||
"text/plain": [
|
||||
"[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'year': 1993, 'rating': 7.7, 'genre': 'science fiction'}),\n",
|
||||
" Document(page_content='Toys come alive and have a blast doing so', metadata={'year': 1995, 'genre': 'animated'})]"
|
||||
]
|
||||
},
|
||||
"execution_count": 6,
|
||||
"metadata": {},
|
||||
"output_type": "execute_result"
|
||||
}
|
||||
],
|
||||
"source": [
|
||||
"# This example only specifies a relevant query\n",
|
||||
"retriever.get_relevant_documents(\"what are two movies about dinosaurs\")"
|
||||
]
|
||||
},
|
||||
{
|
||||
"cell_type": "code",
|
||||
"execution_count": null,
|
||||
"id": "f15d84b3",
|
||||
"id": "60110338",
|
||||
"metadata": {},
|
||||
"outputs": [],
|
||||
"source": []
|
||||
|
||||
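A minimal end-to-end setup that the changed cells above assume (the `llm`, `vectorstore`, `metadata_field_info`, and `document_content_description` names come from earlier, unchanged cells of the notebook) could look roughly like the sketch below; it is reconstructed from the outputs shown above, not copied from the notebook.

```python
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.schema import Document
from langchain.vectorstores import Chroma

# Two of the documents that appear in the execute_result outputs above.
docs = [
    Document(
        page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose",
        metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"},
    ),
    Document(
        page_content="Toys come alive and have a blast doing so",
        metadata={"year": 1995, "genre": "animated"},
    ),
]
vectorstore = Chroma.from_documents(docs, OpenAIEmbeddings())

# Descriptions of the metadata fields let the LLM turn natural language
# into the Comparison/Operation filters printed in the outputs above.
metadata_field_info = [
    AttributeInfo(name="genre", description="The genre of the movie", type="string"),
    AttributeInfo(name="year", description="The year the movie was released", type="integer"),
    AttributeInfo(name="rating", description="A 1-10 rating for the movie", type="float"),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)

retriever = SelfQueryRetriever.from_llm(
    llm,
    vectorstore,
    document_content_description,
    metadata_field_info,
    enable_limit=True,  # lets "two movies about dinosaurs" become limit=2
    verbose=True,
)
```
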
@@ -295,45 +295,13 @@
"retriever.get_relevant_documents(\"What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated\")"
]
},
{
"cell_type": "markdown",
"id": "6fe7536c",
"metadata": {},
"source": [
"## Filter k\n",
"\n",
"We can also use the self query retriever to specify `k`: the number of documents to fetch.\n",
"\n",
"We can do this by passing `enable_limit=True` to the constructor."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3a2937c2",
"id": "69bbd809",
"metadata": {},
"outputs": [],
"source": [
"retriever = SelfQueryRetriever.from_llm(\n",
" llm, \n",
" vectorstore, \n",
" document_content_description, \n",
" metadata_field_info, \n",
" enable_limit=True,\n",
" verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "83d233aa",
"metadata": {},
"outputs": [],
"source": [
"# This example only specifies a relevant query\n",
"retriever.get_relevant_documents(\"What are two movies about dinosaurs\")"
]
},
"source": []
}
],
"metadata": {

@@ -43,10 +43,10 @@
},
"outputs": [
{
"name": "stdout",
"name": "stdin",
"output_type": "stream",
"text": [
"OpenAI API Key:········\n"
"OpenAI API Key: ········\n"
]
}
],
@@ -59,7 +59,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "aac9563e",
"metadata": {
"tags": []
@@ -74,7 +74,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "a3c3999a",
"metadata": {
"tags": []
@@ -92,7 +92,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "dcf88bdf",
"metadata": {
"tags": []
@@ -108,43 +108,23 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "a8c513ab",
"metadata": {},
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = vector_db.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "fc516993",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].page_content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e40d558b",
"metadata": {},
"outputs": [],
"source": []
"source": [
"docs[0]"
]
}
],
"metadata": {
@@ -163,7 +143,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
"version": "3.10.6"
}
},
"nbformat": 4,

@@ -274,7 +274,7 @@
")\n",
"qdrant = Qdrant(\n",
" client=client, collection_name=\"my_documents\", \n",
" embeddings=embeddings\n",
" embedding_function=embeddings.embed_query\n",
")"
]
},

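The hunk above only touches the `Qdrant(...)` constructor call. As a hedged sketch (assuming a qdrant-client recent enough to support the in-process `":memory:"` mode), the surrounding setup might look like:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

embeddings = OpenAIEmbeddings()
qdrant = Qdrant.from_texts(
    ["Lorem ipsum dolor sit amet"],
    embeddings,
    location=":memory:",  # in-process Qdrant; no server required
    collection_name="my_documents",
)
```
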
@@ -36,18 +36,10 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "d6691489-1ebc-40fa-bc09-b0916903a24d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI API Key:········\n"
]
}
],
"outputs": [],
"source": [
"import os\n",
"import getpass\n",
@@ -57,7 +49,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"id": "19a71422",
"metadata": {},
"outputs": [],
@@ -70,7 +62,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": null,
"id": "aac9563e",
"metadata": {},
"outputs": [],
@@ -83,7 +75,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": null,
"id": "a3c3999a",
"metadata": {},
"outputs": [],
@@ -99,7 +91,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"id": "dcf88bdf",
"metadata": {},
"outputs": [],
@@ -109,7 +101,7 @@
" embeddings,\n",
" connection_args={\n",
" \"uri\": ZILLIZ_CLOUD_URI,\n",
" \"user\": ZILLIZ_CLOUD_USERNAME,\n",
" \"username\": ZILLIZ_CLOUD_USERNAME,\n",
" \"password\": ZILLIZ_CLOUD_PASSWORD,\n",
" \"secure\": True\n",
" }\n",
@@ -118,43 +110,23 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": null,
"id": "a8c513ab",
"metadata": {},
"outputs": [],
"source": [
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = vector_db.similarity_search(query)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": null,
"id": "fc516993",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \\n\\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \\n\\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \\n\\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].page_content"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dc85398b",
"metadata": {},
"outputs": [],
"source": []
"source": [
"docs[0]"
]
}
],
"metadata": {
@@ -173,7 +145,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
"version": "3.9.6"
}
},
"nbformat": 4,

@@ -1,91 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "91c6a7ef",
"metadata": {},
"source": [
"# Cassandra Chat Message History\n",
"\n",
"This notebook goes over how to use Cassandra to store chat message history.\n",
"\n",
"Cassandra is a distributed database that is well suited for storing large amounts of data. \n",
"\n",
"It is a good choice for storing chat message history because it is easy to scale and can handle a large number of writes.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "47a601d2",
"metadata": {},
"outputs": [],
"source": [
"# List of contact points to try connecting to Cassandra cluster.\n",
"contact_points = [\"cassandra\"]"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d15e3302",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import CassandraChatMessageHistory\n",
"\n",
"message_history = CassandraChatMessageHistory(\n",
" contact_points=contact_points, session_id=\"test-session\"\n",
")\n",
"\n",
"message_history.add_user_message(\"hi!\")\n",
"\n",
"message_history.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "64fc465e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!', additional_kwargs={}, example=False),\n",
" AIMessage(content='whats up?', additional_kwargs={}, example=False)]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"message_history.messages"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -1,91 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "91c6a7ef",
"metadata": {},
"source": [
"# Mongodb Chat Message History\n",
"\n",
"This notebook goes over how to use Mongodb to store chat message history.\n",
"\n",
"MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas.\n",
"\n",
"MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License (SSPL). - [Wikipedia](https://en.wikipedia.org/wiki/MongoDB)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "47a601d2",
"metadata": {},
"outputs": [],
"source": [
"# Provide the connection string to connect to the MongoDB database\n",
"connection_string = \"mongodb://mongo_user:password123@mongo:27017\""
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d15e3302",
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import MongoDBChatMessageHistory\n",
"\n",
"message_history = MongoDBChatMessageHistory(\n",
" connection_string=connection_string, session_id=\"test-session\"\n",
" )\n",
"\n",
"message_history.add_user_message(\"hi!\")\n",
"\n",
"message_history.add_ai_message(\"whats up?\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "64fc465e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[HumanMessage(content='hi!', additional_kwargs={}, example=False),\n",
" AIMessage(content='whats up?', additional_kwargs={}, example=False)]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"message_history.messages"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
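
Both deleted notebooks stop at reading back `message_history.messages`. The usual next step, shown here as a sketch rather than part of either notebook, is to plug the persistent history into chain memory; the wiring is identical for the Cassandra and MongoDB classes:

```python
from langchain.memory import ConversationBufferMemory

# `message_history` is a CassandraChatMessageHistory or
# MongoDBChatMessageHistory instance from the notebooks above.
memory = ConversationBufferMemory(chat_memory=message_history, return_messages=True)
```
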
@@ -93,7 +93,7 @@
"from typing import Dict\n",
"\n",
"from langchain import PromptTemplate, SagemakerEndpoint\n",
"from langchain.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain.llms.sagemaker_endpoint import ContentHandlerBase\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"import json\n",
"\n",
@@ -110,7 +110,7 @@
" template=prompt_template, input_variables=[\"context\", \"question\"]\n",
")\n",
"\n",
"class ContentHandler(LLMContentHandler):\n",
"class ContentHandler(ContentHandlerBase):\n",
" content_type = \"application/json\"\n",
" accepts = \"application/json\"\n",
"\n",

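The hunk cuts off right after the `accepts` attribute. The remainder of the handler would implement the two transform methods; the sketch below assumes a HuggingFace-style serving container, so the exact payload keys (`"inputs"`, `"generated_text"`) are endpoint-specific assumptions:

```python
class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Serialize the prompt into the JSON body the endpoint expects
        # (payload shape is an assumption; adapt to your container).
        input_str = json.dumps({"inputs": prompt, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # `output` is the streaming body returned by the SageMaker endpoint.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json[0]["generated_text"]
```
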
@@ -6,8 +6,8 @@ First, you should install tracing and set up your environment properly.
You can use either a locally hosted version of this (uses Docker) or a cloud hosted version (in closed alpha).
If you're interested in using the hosted platform, please fill out the form [here](https://forms.gle/tRCEMSeopZf6TE3b6).

- [Locally Hosted Setup](../tracing/local_installation.md)
- [Cloud Hosted Setup](../tracing/hosted_installation.md)
- [Locally Hosted Setup](./tracing/local_installation.md)
- [Cloud Hosted Setup](./tracing/hosted_installation.md)

## Tracing Walkthrough

@@ -17,32 +17,32 @@ A session is just a way to group traces together.
If you click on a session, it will take you to a page with no recorded traces that says "No Runs."
You can create a new session with the new session form.

![](../tracing/homepage.png)
![](tracing/homepage.png)

If we click on the `default` session, we can see that to start we have no traces stored.

![](../tracing/default_empty.png)
![](tracing/default_empty.png)

If we now start running chains and agents with tracing enabled, we will see data show up here.
To do so, we can run [this notebook](../tracing/agent_with_tracing.ipynb) as an example.
To do so, we can run [this notebook](tracing/agent_with_tracing.ipynb) as an example.
After running it, we will see an initial trace show up.

![](../tracing/first_trace.png)
![](tracing/first_trace.png)

From here we can explore the trace at a high level by clicking on the arrow to show nested runs.
We can keep on clicking further and further down to explore deeper and deeper.

![](../tracing/explore.png)
![](tracing/explore.png)

We can also click on the "Explore" button of the top level run to dive even deeper.
Here, we can see the inputs and outputs in full, as well as all the nested traces.

![](../tracing/explore_trace.png)
![](tracing/explore_trace.png)

We can keep on exploring each of these nested traces in more detail.
For example, here is the lowest level trace with the exact inputs/outputs to the LLM.

![](../tracing/explore_llm.png)
![](tracing/explore_llm.png)

## Changing Sessions

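"Running chains and agents with tracing enabled" amounts to setting one environment variable before building the agent. A minimal sketch along the lines of the linked notebook:

```python
import os

os.environ["LANGCHAIN_TRACING"] = "true"  # record runs in the tracing UI

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
agent.run("What is 2 raised to the 0.123 power?")
```
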
@@ -207,7 +207,7 @@
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.chains import ConversationalRetrievalChain\n",
"\n",
"model = ChatOpenAI(model_name='gpt-3.5-turbo') # switch to 'gpt-4'\n",
"model = ChatOpenAI(model='gpt-3.5-turbo') # switch to 'gpt-4'\n",
"qa = ConversationalRetrievalChain.from_llm(model,retriever=retriever)"
]
},

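The hunk constructs `qa` but never invokes it. Usage, as a sketch with `retriever` coming from earlier cells, passes the question together with the running chat history:

```python
chat_history = []
query = "What did the president say about Ketanji Brown Jackson?"
result = qa({"question": query, "chat_history": chat_history})
chat_history.append((query, result["answer"]))
```
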
@@ -55,7 +55,7 @@ See `this notebook <./evaluation/qa_generation.html>`_ for an example of how to
We have two solutions to the lack of metrics.

The first solution is to use no metrics, and rather just rely on looking at results by eye to get a sense for how the chain/agent is performing.
To assist in this, we have developed (and will continue to develop) `tracing <../additional_resources/tracing.html>`_, a UI-based visualizer of your chain and agent runs.
To assist in this, we have developed (and will continue to develop) `tracing <../tracing.html>`_, a UI-based visualizer of your chain and agent runs.

The second solution we recommend is to use Language Models themselves to evaluate outputs.
For this we have a few different chains and prompts aimed at tackling this issue.

@@ -213,7 +213,7 @@
"metadata": {},
"outputs": [],
"source": [
"chain = SQLDatabaseChain.from_llm(llm, db, input_key=\"question\")"
"chain = SQLDatabaseChain(llm=llm, database=db, input_key=\"question\")"
]
},
{
@@ -415,7 +415,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.9.1"
}
},
"nbformat": 4,

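The change above swaps the bare `SQLDatabaseChain(...)` constructor for the `from_llm` classmethod, the same factory `_load_sql_database_chain` uses later in this diff. A hedged sketch of the full setup, with a hypothetical SQLite path:

```python
from langchain import OpenAI, SQLDatabase
from langchain.chains import SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///Chinook.db")  # hypothetical local database
llm = OpenAI(temperature=0)
chain = SQLDatabaseChain.from_llm(llm, db, input_key="question")
chain.run(question="How many employees are there?")
```
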
116
docs/youtube.md
Normal file
116
docs/youtube.md
Normal file
@@ -0,0 +1,116 @@
# YouTube

This is a collection of `LangChain` tutorials and videos on `YouTube`.

### Introduction to LangChain with Harrison Chase, creator of LangChain
- [Building the Future with LLMs, `LangChain`, & `Pinecone`](https://youtu.be/nMniwlGyX-c) by [Pinecone](https://www.youtube.com/@pinecone-io)
- [LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36](https://youtu.be/lhby7Ql7hbk) by [Weaviate • Vector Database](https://www.youtube.com/@Weaviate)
- [LangChain Demo + Q&A with Harrison Chase](https://youtu.be/zaYTXQFR0_s?t=788) by [Full Stack Deep Learning](https://www.youtube.com/@FullStackDeepLearning)
- [LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin)](https://youtu.be/gVkF8cwfBLI) by [Chat with data](https://www.youtube.com/@chatwithdata)

## Tutorials

- [LangChain Crash Course: Build an AutoGPT app in 25 minutes!](https://youtu.be/MlK6SIjcjE8) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)

- [LangChain Crash Course - Build apps with language models](https://youtu.be/LbT1yp6quS8) by [Patrick Loeber](https://www.youtube.com/@patloeber)

- [LangChain Explained in 13 Minutes | QuickStart Tutorial for Beginners](https://youtu.be/aywZrzNaKjs) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)

- [LangChain for Gen AI and LLMs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F) by [James Briggs](https://www.youtube.com/@jamesbriggs):
- #1 [Getting Started with `GPT-3` vs. Open Source LLMs](https://youtu.be/nE2skSRWTTs)
- #2 [Prompt Templates for `GPT 3.5` and other LLMs](https://youtu.be/RflBcK0oDH0)
- #3 [LLM Chains using `GPT 3.5` and other LLMs](https://youtu.be/S8j9Tk0lZHU)
- #4 [Chatbot Memory for `Chat-GPT`, `Davinci` + other LLMs](https://youtu.be/X05uK0TZozM)
- #5 [Chat with OpenAI in LangChain](https://youtu.be/CnAgB3A5OlU)
- #6 [LangChain Agents Deep Dive with `GPT 3.5`](https://youtu.be/jSP-gSEyVeI)
- [Prompt Engineering with OpenAI's `GPT-3` and other LLMs](https://youtu.be/BP9fi_0XTlw)

- [LangChain 101](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5) by [Data Independent](https://www.youtube.com/@DataIndependent):
- [What Is LangChain? - LangChain + `ChatGPT` Overview](https://youtu.be/_v_fgW2SkkQ)
- [Quickstart Guide](https://youtu.be/kYRB-vJFy38)
- [Beginner Guide To 7 Essential Concepts](https://youtu.be/2xxziIWmaSA)
- [`OpenAI` + `Wolfram Alpha`](https://youtu.be/UijbzCIJ99g)
- [Ask Questions On Your Custom (or Private) Files](https://youtu.be/EnT-ZTrcPrg)
- [Connect `Google Drive Files` To `OpenAI`](https://youtu.be/IqqHqDcXLww)
- [`YouTube Transcripts` + `OpenAI`](https://youtu.be/pNcQ5XXMgH4)
- [Question A 300 Page Book (w/ `OpenAI` + `Pinecone`)](https://youtu.be/h0DHDp1FbmQ)
- [Workaround `OpenAI's` Token Limit With Chain Types](https://youtu.be/f9_BWhCI4Zo)
- [Build Your Own OpenAI + LangChain Web App in 23 Minutes](https://youtu.be/U_eV8wfMkXU)
- [Working With The New `ChatGPT API`](https://youtu.be/e9P7FLi5Zy8)
- [OpenAI + LangChain Wrote Me 100 Custom Sales Emails](https://youtu.be/y1pyAQM-3Bo)
- [Structured Output From `OpenAI` (Clean Dirty Data)](https://youtu.be/KwAXfey-xQk)
- [Connect `OpenAI` To +5,000 Tools (LangChain + `Zapier`)](https://youtu.be/7tNm0yiDigU)
- [Use LLMs To Extract Data From Text (Expert Mode)](https://youtu.be/xZzvwR9jdPA)

- [LangChain How to and guides](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ) by [Sam Witteveen](https://www.youtube.com/@samwitteveenai):
- [LangChain Basics - LLMs & PromptTemplates with Colab](https://youtu.be/J_0qvRt4LNk)
- [LangChain Basics - Tools and Chains](https://youtu.be/hI2BY7yl_Ac)
- [`ChatGPT API` Announcement & Code Walkthrough with LangChain](https://youtu.be/phHqvLHCwH4)
- [Conversations with Memory (explanation & code walkthrough)](https://youtu.be/X550Zbz_ROE)
- [Chat with `Flan20B`](https://youtu.be/VW5LBavIfY4)
- [Using `Hugging Face Models` locally (code walkthrough)](https://youtu.be/Kn7SX2Mx_Jk)
- [`PAL` : Program-aided Language Models with LangChain code](https://youtu.be/dy7-LvDu-3s)
- [Building a Summarization System with LangChain and `GPT-3` - Part 1](https://youtu.be/LNq_2s_H01Y)
- [Building a Summarization System with LangChain and `GPT-3` - Part 2](https://youtu.be/d-yeHDLgKHw)
- [Microsoft's `Visual ChatGPT` using LangChain](https://youtu.be/7YEiEyfPF5U)
- [LangChain Agents - Joining Tools and Chains with Decisions](https://youtu.be/ziu87EXZVUE)
- [Comparing LLMs with LangChain](https://youtu.be/rFNG0MIEuW0)
- [Using `Constitutional AI` in LangChain](https://youtu.be/uoVqNFDwpX4)
- [Talking to `Alpaca` with LangChain - Creating an Alpaca Chatbot](https://youtu.be/v6sF8Ed3nTE)
- [Talk to your `CSV` & `Excel` with LangChain](https://youtu.be/xQ3mZhw69bc)
- [`BabyAGI`: Discover the Power of Task-Driven Autonomous Agents!](https://youtu.be/QBcDLSE2ERA)
- [Improve your `BabyAGI` with LangChain](https://youtu.be/DRgPyOXZ-oE)

- [LangChain](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr) by [Prompt Engineering](https://www.youtube.com/@engineerprompt):
- [LangChain Crash Course — All You Need to Know to Build Powerful Apps with LLMs](https://youtu.be/5-fc4Tlgmro)
- [Working with MULTIPLE `PDF` Files in LangChain: `ChatGPT` for your Data](https://youtu.be/s5LhRdh5fu4)
- [`ChatGPT` for YOUR OWN `PDF` files with LangChain](https://youtu.be/TLf90ipMzfE)
- [Talk to YOUR DATA without OpenAI APIs: LangChain](https://youtu.be/wrD-fZvT6UI)

- LangChain by [Chat with data](https://www.youtube.com/@chatwithdata)
- [LangChain Beginner's Tutorial for `Typescript`/`Javascript`](https://youtu.be/bH722QgRlhQ)
- [`GPT-4` Tutorial: How to Chat With Multiple `PDF` Files (~1000 pages of Tesla's 10-K Annual Reports)](https://youtu.be/Ix9WIZpArm0)
- [`GPT-4` & LangChain Tutorial: How to Chat With A 56-Page `PDF` Document (w/`Pinecone`)](https://youtu.be/ih9PBGVVOO4)

- [Get SH\*T Done with Prompt Engineering and LangChain](https://www.youtube.com/watch?v=muXbPpG_ys4&list=PLEJK-H61Xlwzm5FYLDdKt_6yibO33zoMW) by [Venelin Valkov](https://www.youtube.com/@venelin_valkov)
- [Getting Started with LangChain: Load Custom Data, Run OpenAI Models, Embeddings and `ChatGPT`](https://www.youtube.com/watch?v=muXbPpG_ys4)
- [Loaders, Indexes & Vectorstores in LangChain: Question Answering on `PDF` files with `ChatGPT`](https://www.youtube.com/watch?v=FQnvfR8Dmr0)
- [LangChain Models: `ChatGPT`, `Flan Alpaca`, `OpenAI Embeddings`, Prompt Templates & Streaming](https://www.youtube.com/watch?v=zy6LiK5F5-s)
- [LangChain Chains: Use `ChatGPT` to Build Conversational Agents, Summaries and Q&A on Text With LLMs](https://www.youtube.com/watch?v=h1tJZQPcimM)
- [Analyze Custom CSV Data with `GPT-4` using Langchain](https://www.youtube.com/watch?v=Ew3sGdX8at4)

## Videos (sorted by views)

- [Building AI LLM Apps with LangChain (and more?) - LIVE STREAM](https://www.youtube.com/live/M-2Cj_2fzWI?feature=share) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)
- [First look - `ChatGPT` + `WolframAlpha` (`GPT-3.5` and Wolfram|Alpha via LangChain by James Weaver)](https://youtu.be/wYGbY811oMo) by [Dr Alan D. Thompson](https://www.youtube.com/@DrAlanDThompson)
- [LangChain explained - The hottest new Python framework](https://youtu.be/RoR4XJw8wIc) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- [Chatbot with INFINITE MEMORY using `OpenAI` & `Pinecone` - `GPT-3`, `Embeddings`, `ADA`, `Vector DB`, `Semantic`](https://youtu.be/2xNzB7xq8nk) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [LangChain for LLMs is... basically just an Ansible playbook](https://youtu.be/X51N9C-OhlE) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [Build your own LLM Apps with LangChain & `GPT-Index`](https://youtu.be/-75p09zFUJY) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [`BabyAGI` - New System of Autonomous AI Agents with LangChain](https://youtu.be/lg3kJvf1kXo) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [Run `BabyAGI` with Langchain Agents (with Python Code)](https://youtu.be/WosPGHPObx8) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [How to Use Langchain With `Zapier` | Write and Send Email with GPT-3 | OpenAI API Tutorial](https://youtu.be/p9v2-xEa9A0) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [Use Your Locally Stored Files To Get Response From GPT - `OpenAI` | Langchain | Python](https://youtu.be/NC1Ni9KS-rk) by [Shweta Lodha](https://www.youtube.com/@shweta-lodha)
- [`Langchain JS` | How to Use GPT-3, GPT-4 to Reference your own Data | `OpenAI Embeddings` Intro](https://youtu.be/veV2I-NEjaM) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [The easiest way to work with large language models | Learn LangChain in 10min](https://youtu.be/kmbS6FDQh7c) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [4 Autonomous AI Agents: “Westworld” simulation `BabyAGI`, `AutoGPT`, `Camel`, `LangChain`](https://youtu.be/yWbnH6inT_U) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [AI CAN SEARCH THE INTERNET? Langchain Agents + OpenAI ChatGPT](https://youtu.be/J-GL0htqda8) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Query Your Data with GPT-4 | Embeddings, Vector Databases | Langchain JS Knowledgebase](https://youtu.be/jRnUPUTkZmU) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [`Weaviate` + LangChain for LLM apps presented by Erika Cardenas](https://youtu.be/7AGj4Td5Lgw) by [`Weaviate` • Vector Database](https://www.youtube.com/@Weaviate)
- [Langchain Overview — How to Use Langchain & `ChatGPT`](https://youtu.be/oYVYIq0lOtI) by [Python In Office](https://www.youtube.com/@pythoninoffice6568)
- [Langchain Overview - How to Use Langchain & `ChatGPT`](https://youtu.be/oYVYIq0lOtI) by [Python In Office](https://www.youtube.com/@pythoninoffice6568)
- [Custom langchain Agent & Tools with memory. Turn any `Python function` into langchain tool with Gpt 3](https://youtu.be/NIG8lXk0ULg) by [echohive](https://www.youtube.com/@echohive)
- [LangChain: Run Language Models Locally - `Hugging Face Models`](https://youtu.be/Xxxuw4_iCzw) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
- [`ChatGPT` with any `YouTube` video using langchain and `chromadb`](https://youtu.be/TQZfB2bzVwU) by [echohive](https://www.youtube.com/@echohive)
- [How to Talk to a `PDF` using LangChain and `ChatGPT`](https://youtu.be/v2i1YDtrIwk) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- [Langchain Document Loaders Part 1: Unstructured Files](https://youtu.be/O5C0wfsen98) by [Merk](https://www.youtube.com/@merksworld)
- [LangChain - Prompt Templates (what all the best prompt engineers use)](https://youtu.be/1aRu8b0XNOQ) by [Nick Daigler](https://www.youtube.com/@nick_daigs)
- [LangChain. Crear aplicaciones Python impulsadas por GPT](https://youtu.be/DkW_rDndts8) by [Jesús Conde](https://www.youtube.com/@0utKast)
- [Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial](https://youtu.be/fLy0VenZyGc) by [Rachel Woods](https://www.youtube.com/@therachelwoods)
- [`BabyAGI` + `GPT-4` Langchain Agent with Internet Access](https://youtu.be/wx1z_hs5P6E) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI](https://youtu.be/mb_YAABSplk) by [Arnoldas Kemeklis](https://www.youtube.com/@processusAI)
- [Get Started with LangChain in `Node.js`](https://youtu.be/Wxx1KUWJFv4) by [Developers Digest](https://www.youtube.com/@DevelopersDigest)
- [LangChain + `OpenAI` tutorial: Building a Q&A system w/ own text data](https://youtu.be/DYOU_Z0hAwo) by [Samuel Chan](https://www.youtube.com/@SamuelChan)
- [Langchain + `Zapier` Agent](https://youtu.be/yribLAb-pxA) by [Merk](https://www.youtube.com/@merksworld)
- [Connecting the Internet with `ChatGPT` (LLMs) using Langchain And Answers Your Questions](https://youtu.be/9Y0TBC63yZg) by [Kamalraj M M](https://www.youtube.com/@insightbuilder)
- [Build More Powerful LLM Applications for Business’s with LangChain (Beginners Guide)](https://youtu.be/sp3-WLKEcBg) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
@@ -62,7 +62,6 @@ except metadata.PackageNotFoundError:
del metadata # optional, avoids polluting the results of dir(__package__)

verbose: bool = False
debug: bool = False
llm_cache: Optional[BaseCache] = None

# For backwards compatibility

@@ -29,7 +29,7 @@ DELETE /users/{{id}}/cart to delete a user's cart
User query: tell me a joke
Plan: Sorry, this API's domain is shopping, not comedy.

User query: I want to buy a couch
Usery query: I want to buy a couch
Plan: 1. GET /products with a query param to search for couches
2. GET /user to find the user's id
3. POST /users/{{id}}/cart to add a couch to the user's cart

@@ -3,12 +3,14 @@


POWERBI_PREFIX = """You are an agent designed to interact with a Power BI Dataset.
Given an input question, create a syntactically correct DAX query to run, then look at the results of the query and return the answer.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.

Assistant has access to tools that can give context, write queries and execute those queries against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "I don't know" as the answer. The query language that PowerBI uses is called DAX and it is quite particular and complex, so make sure to use the right tools to get the answers the user is looking for.
You have access to tools for interacting with the Power BI Dataset. Only use the below tools. Only use the information returned by the below tools to construct your final answer. Usually I should first ask which tables I have, then how each table is defined and then ask the question to query tool to create a query for me and then I should ask the query tool to execute it, finally create a nice sentence that answers the question. If you receive an error back that mentions that the query was wrong try to phrase the question differently and get a new query from the question to query tool.

Given an input question, create a syntactically correct DAX query to run, then look at the results and return the answer. Sometimes the result indicate something is wrong with the query, or there were errors in the json serialization. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.

Assistant never just starts querying, assistant should first find out which tables there are, then how each table is defined and then ask the question to query tool to create a query and then ask the query tool to execute it, finally create a complete sentence that answers the question, if multiple rows need are asked find a way to write that in a easily readible format for a human. Assistant has tools that can get more context of the tables which helps it write correct queries.
If the question does not seem related to the dataset, just return "I don't know" as the answer.
"""

POWERBI_SUFFIX = """Begin!
@@ -17,13 +19,17 @@ Question: {input}
Thought: I should first ask which tables I have, then how each table is defined and then ask the question to query tool to create a query for me and then I should ask the query tool to execute it, finally create a nice sentence that answers the question.
{agent_scratchpad}"""

POWERBI_CHAT_PREFIX = """Assistant is a large language model built to help users interact with a PowerBI Dataset.
POWERBI_CHAT_PREFIX = """Assistant is a large language model trained by OpenAI built to help users interact with a PowerBI Dataset.

Assistant has access to tools that can give context, write queries and execute those queries against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "I don't know" as the answer. The query language that PowerBI uses is called DAX and it is quite particular and complex, so make sure to use the right tools to get the answers the user is looking for.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.

Given an input question, create a syntactically correct DAX query to run, then look at the results and return the answer. Sometimes the result indicate something is wrong with the query, or there were errors in the json serialization. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.

Assistant never just starts querying, assistant should first find out which tables there are, then how each table is defined and then ask the question to query tool to create a query and then ask the query tool to execute it, finally create a complete sentence that answers the question, if multiple rows need are asked find a way to write that in a easily readible format for a human. Assistant has tools that can get more context of the tables which helps it write correct queries.
Given an input question, create a syntactically correct DAX query to run, then look at the results of the query and return the answer. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.

Overall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.

Usually I should first ask which tables I have, then how each table is defined and then ask the question to query tool to create a query for me and then I should ask the query tool to execute it, finally create a complete sentence that answers the question. If you receive an error back that mentions that the query was wrong try to phrase the question differently and get a new query from the question to query tool.
"""

POWERBI_CHAT_SUFFIX = """TOOLS

@@ -20,7 +20,6 @@ from langchain.tools.ddg_search.tool import DuckDuckGoSearchRun
from langchain.tools.google_search.tool import GoogleSearchResults, GoogleSearchRun
from langchain.tools.metaphor_search.tool import MetaphorSearchResults
from langchain.tools.google_serper.tool import GoogleSerperResults, GoogleSerperRun
from langchain.tools.graphql.tool import BaseGraphQLTool
from langchain.tools.human.tool import HumanInputRun
from langchain.tools.python.tool import PythonREPLTool
from langchain.tools.requests.tool import (
@@ -43,7 +42,6 @@ from langchain.utilities.google_search import GoogleSearchAPIWrapper
from langchain.utilities.google_serper import GoogleSerperAPIWrapper
from langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper
from langchain.utilities.awslambda import LambdaWrapper
from langchain.utilities.graphql import GraphQLAPIWrapper
from langchain.utilities.searx_search import SearxSearchWrapper
from langchain.utilities.serpapi import SerpAPIWrapper
from langchain.utilities.wikipedia import WikipediaAPIWrapper
@@ -247,12 +245,6 @@ def _get_scenexplain(**kwargs: Any) -> BaseTool:
return SceneXplainTool(**kwargs)


def _get_graphql_tool(**kwargs: Any) -> BaseTool:
graphql_endpoint = kwargs["graphql_endpoint"]
wrapper = GraphQLAPIWrapper(graphql_endpoint=graphql_endpoint)
return BaseGraphQLTool(graphql_wrapper=wrapper)


def _get_openweathermap(**kwargs: Any) -> BaseTool:
return OpenWeatherMapQueryRun(api_wrapper=OpenWeatherMapAPIWrapper(**kwargs))

@@ -298,7 +290,6 @@ _EXTRA_OPTIONAL_TOOLS: Dict[str, Tuple[Callable[[KwArg(Any)], BaseTool], List[st
["awslambda_tool_name", "awslambda_tool_description", "function_name"],
),
"sceneXplain": (_get_scenexplain, []),
"graphql": (_get_graphql_tool, ["graphql_endpoint"]),
"openweathermap-api": (_get_openweathermap, ["openweathermap_api_key"]),
}


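Per the registry entry above, the graphql tool is requested with the `"graphql"` key and requires a `graphql_endpoint` argument; the endpoint URL below is illustrative:

```python
from langchain.agents import load_tools

tools = load_tools(
    ["graphql"],
    graphql_endpoint="https://swapi-graphql.netlify.app/.netlify/functions/index",
)
```
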
@@ -10,7 +10,6 @@ from contextvars import ContextVar
from typing import Any, Dict, Generator, List, Optional, Type, TypeVar, Union, cast
from uuid import UUID, uuid4

import langchain
from langchain.callbacks.base import (
BaseCallbackHandler,
BaseCallbackManager,
@@ -24,7 +23,6 @@ from langchain.callbacks.stdout import StdOutCallbackHandler
from langchain.callbacks.tracers.langchain import LangChainTracer
from langchain.callbacks.tracers.langchain_v1 import LangChainTracerV1, TracerSessionV1
from langchain.callbacks.tracers.schemas import TracerSession
from langchain.callbacks.tracers.stdout import ConsoleCallbackHandler
from langchain.schema import (
AgentAction,
AgentFinish,
@@ -51,10 +49,6 @@ tracing_v2_callback_var: ContextVar[
)


def _get_debug() -> bool:
return langchain.debug


@contextmanager
def get_openai_callback() -> Generator[OpenAICallbackHandler, None, None]:
"""Get OpenAI callback handler in a context manager."""
@@ -159,7 +153,7 @@ async def _ahandle_event_for_handler(
message_strings = [get_buffer_string(m) for m in args[1]]
await _ahandle_event_for_handler(
handler,
"on_llm_start",
"on_llm",
"ignore_llm",
args[0],
message_strings,
@@ -843,29 +837,14 @@ def _configure(
os.environ.get("LANGCHAIN_TRACING_V2") is not None or tracer_v2 is not None
)
tracer_session = os.environ.get("LANGCHAIN_SESSION")
debug = _get_debug()
if tracer_session is None:
tracer_session = "default"
if (
verbose
or debug
or tracing_enabled_
or tracing_v2_enabled_
or open_ai is not None
):
if verbose or tracing_enabled_ or tracing_v2_enabled_ or open_ai is not None:
if verbose and not any(
isinstance(handler, StdOutCallbackHandler)
for handler in callback_manager.handlers
):
if debug:
pass
else:
callback_manager.add_handler(StdOutCallbackHandler(), False)
if debug and not any(
isinstance(handler, ConsoleCallbackHandler)
for handler in callback_manager.handlers
):
callback_manager.add_handler(ConsoleCallbackHandler(), True)
callback_manager.add_handler(StdOutCallbackHandler(), False)
if tracing_enabled_ and not any(
isinstance(handler, LangChainTracerV1)
for handler in callback_manager.handlers

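Tying `_configure` back to the module-level flag added in `langchain/__init__.py` earlier in this diff: on the side of the hunk that reads the flag, setting `langchain.debug` attaches the `ConsoleCallbackHandler` (from `callbacks/tracers/stdout.py`, also touched in this diff) to every run. A sketch:

```python
import langchain

langchain.debug = True    # read by _get_debug() inside _configure above
langchain.verbose = True  # the pre-existing flag, kept for backwards compatibility
# Any chain or agent executed after this prints a console trace of each nested run.
```
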
@@ -2,6 +2,5 @@

from langchain.callbacks.tracers.langchain import LangChainTracer
from langchain.callbacks.tracers.langchain_v1 import LangChainTracerV1
from langchain.callbacks.tracers.stdout import ConsoleCallbackHandler

__all__ = ["LangChainTracer", "LangChainTracerV1", "ConsoleCallbackHandler"]
__all__ = ["LangChainTracer", "LangChainTracerV1"]

@@ -56,11 +56,7 @@ class BaseTracer(BaseCallbackHandler, ABC):
raise TracerException(
f"Parent run with UUID {run.parent_run_id} not found."
)
if (
run.child_execution_order is not None
and parent_run.child_execution_order is not None
and run.child_execution_order > parent_run.child_execution_order
):
if run.child_execution_order > parent_run.child_execution_order:
parent_run.child_execution_order = run.child_execution_order
self.run_map.pop(str(run.id))

@@ -72,10 +68,6 @@ class BaseTracer(BaseCallbackHandler, ABC):
parent_run = self.run_map.get(parent_run_id)
if parent_run is None:
raise TracerException(f"Parent run with UUID {parent_run_id} not found.")
if parent_run.child_execution_order is None:
raise TracerException(
f"Parent run with UUID {parent_run_id} has no child execution order."
)

return parent_run.child_execution_order + 1


@@ -8,7 +8,6 @@ from typing import Any, Dict, List, Optional
from uuid import UUID

import requests
from tenacity import retry, stop_after_attempt, wait_fixed

from langchain.callbacks.tracers.base import BaseTracer
from langchain.callbacks.tracers.schemas import (
@@ -34,7 +33,6 @@ def get_endpoint() -> str:
return os.getenv("LANGCHAIN_ENDPOINT", "http://localhost:8000")


@retry(stop=stop_after_attempt(3), wait=wait_fixed(0.5))
def _get_tenant_id(
tenant_id: Optional[str], endpoint: Optional[str], headers: Optional[dict]
) -> str:
@@ -108,7 +106,6 @@ class LangChainTracer(BaseTracer):
self.tenant_id = tenant_id
return tenant_id

@retry(stop=stop_after_attempt(3), wait=wait_fixed(0.5))
def ensure_session(self) -> TracerSession:
"""Upsert a session."""
if self.session is not None:

@@ -8,7 +8,6 @@ from uuid import UUID

from pydantic import BaseModel, Field, root_validator

from langchain.env import get_runtime_environment
from langchain.schema import LLMResult


@@ -23,6 +22,8 @@ class TracerSessionV1Base(BaseModel):
class TracerSessionV1Create(TracerSessionV1Base):
"""Create class for TracerSessionV1."""

pass


class TracerSessionV1(TracerSessionV1Base):
"""TracerSessionV1 schema."""
@@ -108,7 +109,7 @@ class RunBase(BaseModel):
extra: dict
error: Optional[str]
execution_order: int
child_execution_order: Optional[int]
child_execution_order: int
serialized: dict
inputs: dict
outputs: Optional[dict]
@@ -135,14 +136,6 @@ class RunCreate(RunBase):
name: str
session_id: UUID

@root_validator(pre=True)
def add_runtime_env(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Add env info to the run."""
extra = values.get("extra", {})
extra["runtime"] = get_runtime_environment()
values["extra"] = extra
return values


ChainRun.update_forward_refs()
ToolRun.update_forward_refs()

@@ -1,130 +0,0 @@
import json
from typing import Any, List

from langchain.callbacks.tracers.base import BaseTracer
from langchain.callbacks.tracers.schemas import Run
from langchain.input import get_colored_text


def try_json_stringify(obj: Any, fallback: str) -> str:
try:
return json.dumps(obj, indent=2)
except Exception:
return fallback


def elapsed(run: Any) -> str:
elapsed_time = run.end_time - run.start_time
milliseconds = elapsed_time.total_seconds() * 1000
if milliseconds < 1000:
return f"{milliseconds}ms"
return f"{(milliseconds / 1000):.2f}s"


class ConsoleCallbackHandler(BaseTracer):
name = "console_callback_handler"

def _persist_run(self, run: Run) -> None:
pass

def get_parents(self, run: Run) -> List[Run]:
parents = []
current_run = run
while current_run.parent_run_id:
parent = self.run_map.get(str(current_run.parent_run_id))
if parent:
parents.append(parent)
current_run = parent
else:
break
return parents

def get_breadcrumbs(self, run: Run) -> str:
parents = self.get_parents(run)[::-1]
string = " > ".join(
f"{parent.execution_order}:{parent.run_type}:{parent.name}"
if i != len(parents) - 1
else f"{parent.execution_order}:{parent.run_type}:{parent.name}"
for i, parent in enumerate(parents + [run])
)
return string

# logging methods
def _on_chain_start(self, run: Run) -> None:
crumbs = self.get_breadcrumbs(run)
print(
f"{get_colored_text('[chain/start]', color='green')} "
f"[{crumbs}] Entering Chain run with input:\n"
f"{try_json_stringify(run.inputs, '[inputs]')}"
)

def _on_chain_end(self, run: Run) -> None:
crumbs = self.get_breadcrumbs(run)
print(
f"{get_colored_text('[chain/end]', color='blue')} "
f"[{crumbs}] [{elapsed(run)}] Exiting Chain run with output:\n"
f"{try_json_stringify(run.outputs, '[outputs]')}"
)

def _on_chain_error(self, run: Run) -> None:
crumbs = self.get_breadcrumbs(run)
print(
f"{get_colored_text('[chain/error]', color='red')} "
f"[{crumbs}] [{elapsed(run)}] Chain run errored with error:\n"
f"{try_json_stringify(run.error, '[error]')}"
)

def _on_llm_start(self, run: Run) -> None:
crumbs = self.get_breadcrumbs(run)
inputs = (
{"prompts": [p.strip() for p in run.inputs["prompts"]]}
if "prompts" in run.inputs
else run.inputs
)
print(
f"{get_colored_text('[llm/start]', color='green')} "
f"[{crumbs}] Entering LLM run with input:\n"
f"{try_json_stringify(inputs, '[inputs]')}"
)

def _on_llm_end(self, run: Run) -> None:
crumbs = self.get_breadcrumbs(run)
print(
f"{get_colored_text('[llm/end]', color='blue')} "
f"[{crumbs}] [{elapsed(run)}] Exiting LLM run with output:\n"
f"{try_json_stringify(run.outputs, '[response]')}"
)

def _on_llm_error(self, run: Run) -> None:
crumbs = self.get_breadcrumbs(run)
print(
f"{get_colored_text('[llm/error]', color='red')} "
f"[{crumbs}] [{elapsed(run)}] LLM run errored with error:\n"
f"{try_json_stringify(run.error, '[error]')}"
)

def _on_tool_start(self, run: Run) -> None:
crumbs = self.get_breadcrumbs(run)
print(
f'{get_colored_text("[tool/start]", color="green")} '
f"[{crumbs}] Entering Tool run with input:\n"
f'"{run.inputs["input"].strip()}"'
)

def _on_tool_end(self, run: Run) -> None:
crumbs = self.get_breadcrumbs(run)
if run.outputs:
print(
f'{get_colored_text("[tool/end]", color="blue")} '
f"[{crumbs}] [{elapsed(run)}] Exiting Tool run with output:\n"
f'"{run.outputs["output"].strip()}"'
)

def _on_tool_error(self, run: Run) -> None:
crumbs = self.get_breadcrumbs(run)
print(
f"{get_colored_text('[tool/error]', color='red')} "
f"[{crumbs}] [{elapsed(run)}] "
f"Tool run errored with error:\n"
f"{run.error}"
)
@@ -36,7 +36,7 @@ class _ResponseChain(LLMChain):
*,
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Tuple[Sequence[str], Sequence[float]]:
_, llm_result = self.generate([_input], run_manager=run_manager)
llm_result = self.generate([_input], run_manager=run_manager)
return self._extract_tokens_and_log_probs(llm_result.generations[0])

@abstractmethod

@@ -53,7 +53,7 @@ class HypotheticalDocumentEmbedder(Chain, Embeddings):
def embed_query(self, text: str) -> List[float]:
"""Generate a hypothetical document and embedded it."""
var_name = self.llm_chain.input_keys[0]
_, result = self.llm_chain.generate([{var_name: text}])
result = self.llm_chain.generate([{var_name: text}])
documents = [generation.text for generation in result.generations[0]]
embeddings = self.embed_documents(documents)
return self.combine_embeddings(embeddings)

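For context on the `embed_query` change: the embedder is normally built with `from_llm` and then used like any other embedding model. A sketch, with `"web_search"` selecting one of the bundled HyDE prompts:

```python
from langchain.chains import HypotheticalDocumentEmbedder
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI

base_embeddings = OpenAIEmbeddings()
llm = OpenAI()

embeddings = HypotheticalDocumentEmbedder.from_llm(llm, base_embeddings, "web_search")
result = embeddings.embed_query("Where is the Taj Mahal?")
```
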
@@ -38,7 +38,6 @@ class LLMChain(Chain):
"""Prompt object to use."""
llm: BaseLanguageModel
output_key: str = "text" #: :meta private:
return_formatted_prompt: bool = False

class Config:
"""Configuration for this pydantic object."""
@@ -60,45 +59,37 @@ class LLMChain(Chain):

:meta private:
"""
if self.return_formatted_prompt:
return [self.output_key, "formatted_prompt"]
else:
return [self.output_key]
return [self.output_key]

def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
prompts, response = self.generate([inputs], run_manager=run_manager)
output = self.create_outputs(response)[0]
if self.return_formatted_prompt:
output["formatted_prompt"] = prompts[0]
return output
response = self.generate([inputs], run_manager=run_manager)
return self.create_outputs(response)[0]

def generate(
self,
input_list: List[Dict[str, Any]],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Tuple[List[PromptValue], LLMResult]:
) -> LLMResult:
"""Generate LLM result from inputs."""
prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
result = self.llm.generate_prompt(
return self.llm.generate_prompt(
prompts, stop, callbacks=run_manager.get_child() if run_manager else None
)
return prompts, result

async def agenerate(
self,
input_list: List[Dict[str, Any]],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Tuple[List[PromptValue], LLMResult]:
) -> LLMResult:
"""Generate LLM result from inputs."""
prompts, stop = await self.aprep_prompts(input_list, run_manager=run_manager)
result = await self.llm.agenerate_prompt(
prompts, stop = await self.aprep_prompts(input_list)
return await self.llm.agenerate_prompt(
prompts, stop, callbacks=run_manager.get_child() if run_manager else None
)
return prompts, result

def prep_prompts(
self,
@@ -160,15 +151,12 @@ class LLMChain(Chain):
{"input_list": input_list},
)
try:
prompts, response = self.generate(input_list, run_manager=run_manager)
response = self.generate(input_list, run_manager=run_manager)
except (KeyboardInterrupt, Exception) as e:
run_manager.on_chain_error(e)
raise e
outputs = self.create_outputs(response)
run_manager.on_chain_end({"outputs": outputs})
if self.return_formatted_prompt:
for i, o in enumerate(outputs):
o["formatted_prompt"] = prompts[i]
return outputs

async def aapply(
@@ -183,20 +171,15 @@ class LLMChain(Chain):
{"input_list": input_list},
)
try:
prompts, response = await self.agenerate(
input_list, run_manager=run_manager
)
response = await self.agenerate(input_list, run_manager=run_manager)
except (KeyboardInterrupt, Exception) as e:
await run_manager.on_chain_error(e)
raise e
outputs = self.create_outputs(response)
await run_manager.on_chain_end({"outputs": outputs})
if self.return_formatted_prompt:
for i, o in enumerate(outputs):
o["formatted_prompt"] = prompts[i]
return outputs

def create_outputs(self, response: LLMResult) -> List[Dict[str, Any]]:
def create_outputs(self, response: LLMResult) -> List[Dict[str, str]]:
"""Create outputs from response."""
return [
# Get the text of the top generated string.
@@ -209,11 +192,8 @@ class LLMChain(Chain):
inputs: Dict[str, Any],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, str]:
prompts, response = await self.agenerate([inputs], run_manager=run_manager)
output = self.create_outputs(response)[0]
if self.return_formatted_prompt:
output["formatted_prompt"] = prompts[0]
return output
response = await self.agenerate([inputs], run_manager=run_manager)
return self.create_outputs(response)[0]

def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
"""Format prompt with kwargs and pass to LLM.

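A hedged sketch of the flag this hunk toggles: on the branch that carries `return_formatted_prompt`, the chain's output dict includes the rendered prompt alongside the generated text. `FakeListLLM` stands in for a real model here; the exact output types are an assumption.

```python
from langchain.chains import LLMChain
from langchain.llms.fake import FakeListLLM
from langchain.prompts import PromptTemplate

chain = LLMChain(
    llm=FakeListLLM(responses=["4"]),
    prompt=PromptTemplate.from_template("What is {question}?"),
    return_formatted_prompt=True,  # only valid on the side of the diff that adds this field
)
out = chain({"question": "2 + 2"})
# out["text"] == "4"; out["formatted_prompt"] holds the rendered prompt value
```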
@@ -307,9 +307,7 @@ def _load_sql_database_chain(config: dict, **kwargs: Any) -> SQLDatabaseChain:
if "prompt" in config:
prompt_config = config.pop("prompt")
prompt = load_prompt_from_config(prompt_config)
else:
prompt = None
return SQLDatabaseChain.from_llm(llm, database, prompt=prompt, **config)
return SQLDatabaseChain(database=database, llm=llm, prompt=prompt, **config)


def _load_vector_db_qa_with_sources_chain(

@@ -52,7 +52,7 @@ class QAGenerationChain(Chain):
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, List]:
docs = self.text_splitter.create_documents([inputs[self.input_key]])
_, results = self.llm_chain.generate(
results = self.llm_chain.generate(
[{"text": d.page_content} for d in docs], run_manager=run_manager
)
qa = [json.loads(res[0].text) for res in results.generations]

@@ -18,8 +18,6 @@ from langchain.chains.query_constructor.prompt import (
DEFAULT_SCHEMA,
DEFAULT_SUFFIX,
EXAMPLE_PROMPT,
EXAMPLES_WITH_LIMIT,
SCHEMA_WITH_LIMIT,
)
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.output_parsers.structured import parse_json_markdown
@@ -40,11 +38,7 @@ class StructuredQueryOutputParser(BaseOutputParser[StructuredQuery]):
parsed["filter"] = None
else:
parsed["filter"] = self.ast_parse(parsed["filter"])
return StructuredQuery(
query=parsed["query"],
filter=parsed["filter"],
limit=parsed.get("limit"),
)
return StructuredQuery(query=parsed["query"], filter=parsed["filter"])
except Exception as e:
raise OutputParserException(
f"Parsing text\n{text}\n raised following error:\n{e}"
@@ -76,25 +70,15 @@ def _get_prompt(
examples: Optional[List] = None,
allowed_comparators: Optional[Sequence[Comparator]] = None,
allowed_operators: Optional[Sequence[Operator]] = None,
enable_limit: bool = False,
) -> BasePromptTemplate:
attribute_str = _format_attribute_info(attribute_info)
examples = examples or DEFAULT_EXAMPLES
allowed_comparators = allowed_comparators or list(Comparator)
allowed_operators = allowed_operators or list(Operator)
if enable_limit:
schema = SCHEMA_WITH_LIMIT.format(
allowed_comparators=" | ".join(allowed_comparators),
allowed_operators=" | ".join(allowed_operators),
)

examples = examples or EXAMPLES_WITH_LIMIT
else:
schema = DEFAULT_SCHEMA.format(
allowed_comparators=" | ".join(allowed_comparators),
allowed_operators=" | ".join(allowed_operators),
)

examples = examples or DEFAULT_EXAMPLES
schema = DEFAULT_SCHEMA.format(
allowed_comparators=" | ".join(allowed_comparators),
allowed_operators=" | ".join(allowed_operators),
)
prefix = DEFAULT_PREFIX.format(schema=schema)
suffix = DEFAULT_SUFFIX.format(
i=len(examples) + 1, content=document_contents, attributes=attribute_str
@@ -103,7 +87,7 @@ def _get_prompt(
allowed_comparators=allowed_comparators, allowed_operators=allowed_operators
)
return FewShotPromptTemplate(
examples=examples,
examples=DEFAULT_EXAMPLES,
example_prompt=EXAMPLE_PROMPT,
input_variables=["query"],
suffix=suffix,
@@ -119,7 +103,6 @@ def load_query_constructor_chain(
examples: Optional[List] = None,
allowed_comparators: Optional[Sequence[Comparator]] = None,
allowed_operators: Optional[Sequence[Operator]] = None,
enable_limit: bool = False,
**kwargs: Any,
) -> LLMChain:
prompt = _get_prompt(
@@ -128,6 +111,5 @@ def load_query_constructor_chain(
examples=examples,
allowed_comparators=allowed_comparators,
allowed_operators=allowed_operators,
enable_limit=enable_limit,
)
return LLMChain(llm=llm, prompt=prompt, **kwargs)

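A sketch (not taken from the diff) of how the loader above is typically called; `enable_limit` switches to the "limit"-aware schema and examples. The attribute values and `llm` are placeholders.

```python
from langchain.chains.query_constructor.base import load_query_constructor_chain
from langchain.chains.query_constructor.schema import AttributeInfo

attribute_info = [
    AttributeInfo(name="artist", description="Name of the song artist", type="string"),
    AttributeInfo(name="length", description="Length of the song in seconds", type="integer"),
]
chain = load_query_constructor_chain(
    llm,  # any BaseLanguageModel; e.g. a FakeListLLM for testing
    document_contents="Lyrics of a song",
    attribute_info=attribute_info,
    enable_limit=True,  # only on the side of the compare that keeps this kwarg
)
# Returns the model's structured-request text; parsing into a StructuredQuery
# depends on the prompt's output parser.
structured = chain.run("three short songs by Taylor Swift")
```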
@@ -81,4 +81,3 @@ class Operation(FilterDirective):
class StructuredQuery(Expr):
query: str
filter: Optional[FilterDirective]
limit: Optional[int]

@@ -46,16 +46,6 @@ NO_FILTER_ANSWER = """\
```\
"""

WITH_LIMIT_ANSWER = """\
```json
{{
"query": "love",
"filter": "NO_FILTER",
"limit": 2
}}
```\
"""

DEFAULT_EXAMPLES = [
{
"i": 1,
@@ -71,27 +61,6 @@ DEFAULT_EXAMPLES = [
},
]

EXAMPLES_WITH_LIMIT = [
{
"i": 1,
"data_source": SONG_DATA_SOURCE,
"user_query": "What are songs by Taylor Swift or Katy Perry about teenage romance under 3 minutes long in the dance pop genre",
"structured_request": FULL_ANSWER,
},
{
"i": 2,
"data_source": SONG_DATA_SOURCE,
"user_query": "What are songs that were not published on Spotify",
"structured_request": NO_FILTER_ANSWER,
},
{
"i": 3,
"data_source": SONG_DATA_SOURCE,
"user_query": "What are three songs about love",
"structured_request": WITH_LIMIT_ANSWER,
},
]

EXAMPLE_PROMPT_TEMPLATE = """\
<< Example {i}. >>
Data Source:
@@ -147,45 +116,6 @@ Make sure that filters are only used as needed. If there are no filters that sho
applied return "NO_FILTER" for the filter value.\
"""

SCHEMA_WITH_LIMIT = """\
<< Structured Request Schema >>
When responding use a markdown code snippet with a JSON object formatted in the \
following schema:

```json
{{{{
"query": string \\ text string to compare to document contents
"filter": string \\ logical condition statement for filtering documents
"limit": int \\ the number of documents to retrieve
}}}}
```

The query string should contain only text that is expected to match the contents of \
documents. Any conditions in the filter should not be mentioned in the query as well.

A logical condition statement is composed of one or more comparison and logical \
operation statements.

A comparison statement takes the form: `comp(attr, val)`:
- `comp` ({allowed_comparators}): comparator
- `attr` (string): name of attribute to apply the comparison to
- `val` (string): is the comparison value

A logical operation statement takes the form `op(statement1, statement2, ...)`:
- `op` ({allowed_operators}): logical operator
- `statement1`, `statement2`, ... (comparison statements or logical operation \
statements): one or more statements to apply the operation to

Make sure that you only use the comparators and logical operators listed above and \
no others.
Make sure that filters only refer to attributes that exist in the data source.
Make sure that filters take into account the descriptions of attributes and only make \
comparisons that are feasible given the type of data being stored.
Make sure that filters are only used as needed. If there are no filters that should be \
applied return "NO_FILTER" for the filter value.
Make sure the `limit` is always an int value. It is an optional parameter so leave it blank if it is does not make sense.
"""

DEFAULT_PREFIX = """\
Your goal is to structure the user's query to match the request schema provided below.


@@ -130,7 +130,7 @@ class SQLDatabaseChain(Chain):
template=QUERY_CHECKER, input_variables=["query", "dialect"]
)
query_checker_chain = LLMChain(
llm=self.llm_chain.llm, prompt=query_checker_prompt
llm=self.llm, prompt=query_checker_prompt
)
query_checker_inputs = {
"query": sql_cmd,
@@ -223,8 +223,8 @@ class SQLDatabaseSequentialChain(Chain):
**kwargs: Any,
) -> SQLDatabaseSequentialChain:
"""Load the necessary chains."""
sql_chain = SQLDatabaseChain.from_llm(
llm, database, prompt=query_prompt, **kwargs
sql_chain = SQLDatabaseChain(
llm=llm, database=database, prompt=query_prompt, **kwargs
)
decider_chain = LLMChain(
llm=llm, prompt=decider_prompt, output_key="table_names"

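The two construction styles this hunk toggles between, sketched side by side: the `from_llm` factory builds the inner `LLMChain` for you, while the direct constructor (the other side of the compare) takes the `llm` field itself. The database URI is a placeholder.

```python
from langchain.chains import SQLDatabaseChain
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")  # hypothetical database
chain = SQLDatabaseChain.from_llm(llm, db)          # factory style
# chain = SQLDatabaseChain(llm=llm, database=db)    # direct-constructor style
```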
@@ -2,7 +2,6 @@ import asyncio
import inspect
import warnings
from abc import ABC, abstractmethod
from functools import partial
from typing import Any, Dict, List, Mapping, Optional, Sequence

from pydantic import Extra, Field, root_validator
@@ -240,12 +239,3 @@ class SimpleChatModel(BaseChatModel):
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> str:
"""Simpler interface."""

async def _agenerate(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
) -> ChatResult:
func = partial(self._generate, messages, stop=stop, run_manager=run_manager)
return await asyncio.get_event_loop().run_in_executor(None, func)

@@ -1,17 +1,9 @@
"""Wrapper around Google's PaLM Chat API."""
from __future__ import annotations

import logging
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Mapping, Optional
from typing import TYPE_CHECKING, Any, Dict, List, Mapping, Optional

from pydantic import BaseModel, root_validator
from tenacity import (
before_sleep_log,
retry,
retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)

from langchain.callbacks.manager import (
AsyncCallbackManagerForLLMRun,
@@ -32,8 +24,6 @@ from langchain.utils import get_from_dict_or_env
if TYPE_CHECKING:
import google.generativeai as genai

logger = logging.getLogger(__name__)


class ChatGooglePalmError(Exception):
pass
@@ -166,51 +156,6 @@ def _messages_to_prompt_dict(
)


def _create_retry_decorator() -> Callable[[Any], Any]:
"""Returns a tenacity retry decorator, preconfigured to handle PaLM exceptions"""
import google.api_core.exceptions

multiplier = 2
min_seconds = 1
max_seconds = 60
max_retries = 10

return retry(
reraise=True,
stop=stop_after_attempt(max_retries),
wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),
retry=(
retry_if_exception_type(google.api_core.exceptions.ResourceExhausted)
| retry_if_exception_type(google.api_core.exceptions.ServiceUnavailable)
| retry_if_exception_type(google.api_core.exceptions.GoogleAPIError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)


def chat_with_retry(llm: ChatGooglePalm, **kwargs: Any) -> Any:
"""Use tenacity to retry the completion call."""
retry_decorator = _create_retry_decorator()

@retry_decorator
def _chat_with_retry(**kwargs: Any) -> Any:
return llm.client.chat(**kwargs)

return _chat_with_retry(**kwargs)


async def achat_with_retry(llm: ChatGooglePalm, **kwargs: Any) -> Any:
"""Use tenacity to retry the async completion call."""
retry_decorator = _create_retry_decorator()

@retry_decorator
async def _achat_with_retry(**kwargs: Any) -> Any:
# Use OpenAI's async api https://github.com/openai/openai-python#async-api
return await llm.client.chat_async(**kwargs)

return await _achat_with_retry(**kwargs)


class ChatGooglePalm(BaseChatModel, BaseModel):
"""Wrapper around Google's PaLM Chat API.

@@ -282,8 +227,7 @@ class ChatGooglePalm(BaseChatModel, BaseModel):
) -> ChatResult:
prompt = _messages_to_prompt_dict(messages)

response: genai.types.ChatResponse = chat_with_retry(
self,
response: genai.types.ChatResponse = self.client.chat(
model=self.model_name,
prompt=prompt,
temperature=self.temperature,
@@ -302,8 +246,7 @@ class ChatGooglePalm(BaseChatModel, BaseModel):
) -> ChatResult:
prompt = _messages_to_prompt_dict(messages)

response: genai.types.ChatResponse = await achat_with_retry(
self,
response: genai.types.ChatResponse = await self.client.chat_async(
model=self.model_name,
prompt=prompt,
temperature=self.temperature,

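A minimal standalone illustration of the tenacity pattern the hunk above removes: exponential backoff between 1 and 60 seconds, up to 10 attempts, re-raising the last error. `ConnectionError` is a stand-in for the `google.api_core` exception types.

```python
import logging

from tenacity import (
    before_sleep_log,
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)

logger = logging.getLogger(__name__)

@retry(
    reraise=True,
    stop=stop_after_attempt(10),
    wait=wait_exponential(multiplier=2, min=1, max=60),
    retry=retry_if_exception_type(ConnectionError),  # stand-in exception type
    before_sleep=before_sleep_log(logger, logging.WARNING),
)
def flaky_call() -> str:
    """Any call that may transiently fail; retried with backoff."""
    ...
```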
@@ -1,16 +0,0 @@
server {
listen 80;
server_name localhost;
error_log /var/log/nginx/error.log warn;

location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html;
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}
}
@@ -1,17 +0,0 @@
version: '3'
services:
ngrok:
image: ngrok/ngrok:latest
restart: unless-stopped
command:
- "start"
- "--all"
- "--config"
- "/etc/ngrok.yml"
volumes:
- ./ngrok_config.yaml:/etc/ngrok.yml
ports:
- 4040:4040
langchain-backend:
depends_on:
- ngrok
@@ -1,50 +0,0 @@
version: '3'
services:
langchain-frontend:
image: langchain/langchainplus-frontend:latest
ports:
- 80:80
environment:
- BACKEND_URL=http://langchain-backend:8000
- PUBLIC_BASE_URL=http://localhost:8000
- PUBLIC_DEV_MODE=true
depends_on:
- langchain-backend
volumes:
- ./conf/nginx.conf:/etc/nginx/default.conf:ro
build:
context: frontend-react/.
dockerfile: Dockerfile
langchain-backend:
image: langchain/langchainplus-backend:latest
environment:
- PORT=8000
- LANGCHAIN_ENV=local_docker
- LOG_LEVEL=warning
ports:
- 8000:8000
depends_on:
- langchain-db
build:
context: backend/.
dockerfile: Dockerfile
langchain-db:
image: postgres:14.1
command:
[
"postgres",
"-c",
"log_min_messages=WARNING",
"-c",
"client_min_messages=WARNING"
]
environment:
- POSTGRES_PASSWORD=postgres
- POSTGRES_USER=postgres
- POSTGRES_DB=postgres
volumes:
- langchain-db-data:/var/lib/postgresql/data
ports:
- 5433:5432
volumes:
langchain-db-data:
@@ -1,250 +0,0 @@
import argparse
import logging
import os
import shutil
import subprocess
from contextlib import contextmanager
from pathlib import Path
from subprocess import CalledProcessError
from typing import Generator, List, Optional

import requests
import yaml

from langchain.env import get_runtime_environment

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger(__name__)

_DIR = Path(__file__).parent


def get_docker_compose_command() -> List[str]:
"""Get the correct docker compose command for this system."""
try:
subprocess.check_call(
["docker", "compose", "--version"],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
return ["docker", "compose"]
except (CalledProcessError, FileNotFoundError):
try:
subprocess.check_call(
["docker-compose", "--version"],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
return ["docker-compose"]
except (CalledProcessError, FileNotFoundError):
raise ValueError(
"Neither 'docker compose' nor 'docker-compose'"
" commands are available. Please install the Docker"
" server following the instructions for your operating"
" system at https://docs.docker.com/engine/install/"
)


def get_ngrok_url(auth_token: Optional[str]) -> str:
"""Get the ngrok URL for the LangChainPlus server."""
ngrok_url = "http://localhost:4040/api/tunnels"
try:
response = requests.get(ngrok_url)
response.raise_for_status()
exposed_url = response.json()["tunnels"][0]["public_url"]
except requests.exceptions.HTTPError:
raise ValueError("Could not connect to ngrok console.")
except (KeyError, IndexError):
message = "ngrok failed to start correctly. "
if auth_token is not None:
message += "Please check that your authtoken is correct."
raise ValueError(message)
return exposed_url


@contextmanager
def create_ngrok_config(
auth_token: Optional[str] = None,
) -> Generator[Path, None, None]:
"""Create the ngrok configuration file."""
config_path = _DIR / "ngrok_config.yaml"
if config_path.exists():
# If there was an error in a prior run, it's possible
# Docker made this a directory instead of a file
if config_path.is_dir():
shutil.rmtree(config_path)
else:
config_path.unlink()
ngrok_config = {
"tunnels": {
"langchain": {
"proto": "http",
"addr": "langchain-backend:8000",
}
},
"version": "2",
"region": "us",
}
if auth_token is not None:
ngrok_config["authtoken"] = auth_token
config_path = _DIR / "ngrok_config.yaml"
with config_path.open("w") as f:
yaml.dump(ngrok_config, f)
yield config_path
# Delete the config file after use
config_path.unlink(missing_ok=True)


class PlusCommand:
"""Manage the LangChainPlus Tracing server."""

def __init__(self) -> None:
self.docker_compose_command = get_docker_compose_command()
self.docker_compose_file = (
Path(__file__).absolute().parent / "docker-compose.yaml"
)
self.ngrok_path = Path(__file__).absolute().parent / "docker-compose.ngrok.yaml"

def _open_browser(self, url: str) -> None:
try:
subprocess.run(["open", url])
except FileNotFoundError:
pass

def _start_local(self) -> None:
command = [
*self.docker_compose_command,
"-f",
str(self.docker_compose_file),
]
subprocess.run(
[
*command,
"up",
"--pull=always",
"--quiet-pull",
"--wait",
]
)
logger.info(
"langchain plus server is running at http://localhost. To connect"
" locally, set the following environment variable"
" when running your LangChain application."
)

logger.info("\tLANGCHAIN_TRACING_V2=true")
self._open_browser("http://localhost")

def _start_and_expose(self, auth_token: Optional[str]) -> None:
with create_ngrok_config(auth_token=auth_token):
command = [
*self.docker_compose_command,
"-f",
str(self.docker_compose_file),
"-f",
str(self.ngrok_path),
]
subprocess.run(
[
*command,
"up",
"--pull=always",
"--quiet-pull",
"--wait",
]
)
logger.info(
"ngrok is running. You can view the dashboard at http://0.0.0.0:4040"
)
ngrok_url = get_ngrok_url(auth_token)
logger.info(
"langchain plus server is running at http://localhost."
" To connect remotely, set the following environment"
" variable when running your LangChain application."
)
logger.info("\tLANGCHAIN_TRACING_V2=true")
logger.info(f"\tLANGCHAIN_ENDPOINT={ngrok_url}")
self._open_browser("http://0.0.0.0:4040")
self._open_browser("http://localhost")

def start(self, *, expose: bool = False, auth_token: Optional[str] = None) -> None:
"""Run the LangChainPlus server locally.

Args:
expose: If True, expose the server to the internet using ngrok.
auth_token: The ngrok authtoken to use (visible in the ngrok dashboard).
If not provided, ngrok server session length will be restricted.
"""

if expose:
self._start_and_expose(auth_token=auth_token)
else:
self._start_local()

def stop(self) -> None:
"""Stop the LangChainPlus server."""
subprocess.run(
[
*self.docker_compose_command,
"-f",
str(self.docker_compose_file),
"-f",
str(self.ngrok_path),
"down",
]
)


def env() -> None:
"""Print the runtime environment information."""
env = get_runtime_environment()
logger.info("LangChain Environment:")
logger.info("\n".join(f"{k}:{v}" for k, v in env.items()))


def main() -> None:
"""Main entrypoint for the CLI."""
parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(description="LangChainPlus CLI commands")

server_command = PlusCommand()
server_parser = subparsers.add_parser("plus", description=server_command.__doc__)
server_subparsers = server_parser.add_subparsers()

server_start_parser = server_subparsers.add_parser(
"start", description="Start the LangChainPlus server."
)
server_start_parser.add_argument(
"--expose",
action="store_true",
help="Expose the server to the internet using ngrok.",
)
server_start_parser.add_argument(
"--ngrok-authtoken",
default=os.getenv("NGROK_AUTHTOKEN"),
help="The ngrok authtoken to use (visible in the ngrok dashboard)."
" If not provided, ngrok server session length will be restricted.",
)
server_start_parser.set_defaults(
func=lambda args: server_command.start(
expose=args.expose, auth_token=args.ngrok_authtoken
)
)

server_stop_parser = server_subparsers.add_parser(
"stop", description="Stop the LangChainPlus server."
)
server_stop_parser.set_defaults(func=lambda args: server_command.stop())

env_parser = subparsers.add_parser("env")
env_parser.set_defaults(func=lambda args: env())

args = parser.parse_args()
if not hasattr(args, "func"):
parser.print_help()
return
args.func(args)


if __name__ == "__main__":
main()
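The dispatch idiom the deleted CLI is built on, reduced to a runnable core: each subparser stores a callable under `args.func` via `set_defaults`, and `main()` either invokes it or prints help when no subcommand was given.

```python
import argparse

def main() -> None:
    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers()

    # each subcommand registers its handler under args.func
    start = subparsers.add_parser("start")
    start.set_defaults(func=lambda args: print("starting"))

    args = parser.parse_args()
    if not hasattr(args, "func"):
        parser.print_help()
        return
    args.func(args)

if __name__ == "__main__":
    main()
```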
@@ -26,17 +26,11 @@ from pydantic import BaseSettings, Field, root_validator
from requests import Response

from langchain.base_language import BaseLanguageModel
from langchain.callbacks.manager import tracing_v2_enabled
from langchain.callbacks.tracers.langchain import LangChainTracer
from langchain.callbacks.tracers.schemas import Run, TracerSession
from langchain.chains.base import Chain
from langchain.chat_models.base import BaseChatModel
from langchain.client.models import (
Dataset,
DatasetCreate,
Example,
ExampleCreate,
ListRunsQueryParams,
)
from langchain.client.models import Dataset, DatasetCreate, Example, ExampleCreate
from langchain.llms.base import BaseLLM
from langchain.schema import ChatResult, LLMResult, messages_from_dict
from langchain.utils import raise_for_status_with_text, xor_args
@@ -199,71 +193,6 @@ class LangChainPlusClient(BaseSettings):
raise ValueError(f"Dataset {file_name} already exists")
return Dataset(**result)

def read_run(self, run_id: str) -> Run:
"""Read a run from the LangChain+ API."""
response = self._get(f"/runs/{run_id}")
raise_for_status_with_text(response)
return Run(**response.json())

def list_runs(
self,
*,
session_id: Optional[str] = None,
session_name: Optional[str] = None,
run_type: Optional[str] = None,
**kwargs: Any,
) -> List[Run]:
"""List runs from the LangChain+ API."""
if session_name is not None:
if session_id is not None:
raise ValueError("Only one of session_id or session_name may be given")
session_id = self.read_session(session_name=session_name).id
query_params = ListRunsQueryParams(
session_id=session_id, run_type=run_type, **kwargs
)
filtered_params = {
k: v for k, v in query_params.dict().items() if v is not None
}
response = self._get("/runs", params=filtered_params)
raise_for_status_with_text(response)
return [Run(**run) for run in response.json()]

@xor_args(("session_id", "session_name"))
def read_session(
self, *, session_id: Optional[str] = None, session_name: Optional[str] = None
) -> TracerSession:
"""Read a session from the LangChain+ API."""
path = "/sessions"
params: Dict[str, Any] = {"limit": 1, "tenant_id": self.tenant_id}
if session_id is not None:
path += f"/{session_id}"
elif session_name is not None:
params["name"] = session_name
else:
raise ValueError("Must provide dataset_name or dataset_id")
response = self._get(
path,
params=params,
)
raise_for_status_with_text(response)
response = self._get(
path,
params=params,
)
raise_for_status_with_text(response)
result = response.json()
if isinstance(result, list):
if len(result) == 0:
raise ValueError(f"Dataset {session_name} not found")
return TracerSession(**result[0])
return TracerSession(**response.json())

def list_sessions(self) -> List[TracerSession]:
"""List sessions from the LangChain+ API."""
response = self._get("/sessions")
raise_for_status_with_text(response)
return [TracerSession(**session) for session in response.json()]

def create_dataset(self, dataset_name: str, description: str) -> Dataset:
"""Create a dataset in the LangChain+ API."""
dataset = DatasetCreate(
@@ -422,13 +351,14 @@ class LangChainPlusClient(BaseSettings):
except Exception as e:
logger.warning(f"Chain failed for example {example.id}. Error: {e}")
outputs.append({"Error": str(e)})
langchain_tracer.example_id = previous_example_id
finally:
langchain_tracer.example_id = previous_example_id
return outputs

@staticmethod
async def _gather_with_concurrency(
n: int,
initializer: Callable[[], Coroutine[Any, Any, LangChainTracer]],
initializer: Callable[[], Coroutine[Any, Any, Tuple[LangChainTracer, Dict]]],
*async_funcs: Callable[[LangChainTracer, Dict], Coroutine[Any, Any, Any]],
) -> List[Any]:
"""
@@ -443,28 +373,21 @@ class LangChainPlusClient(BaseSettings):
A list of results from the coroutines.
"""
semaphore = asyncio.Semaphore(n)
job_state = {"num_processed": 0}

tracer_queue: asyncio.Queue[LangChainTracer] = asyncio.Queue()
for _ in range(n):
tracer_queue.put_nowait(await initializer())
tracer, job_state = await initializer()

async def run_coroutine_with_semaphore(
async_func: Callable[[LangChainTracer, Dict], Coroutine[Any, Any, Any]]
) -> Any:
async with semaphore:
tracer = await tracer_queue.get()
try:
result = await async_func(tracer, job_state)
finally:
tracer_queue.put_nowait(tracer)
return result
return await async_func(tracer, job_state)

return await asyncio.gather(
*(run_coroutine_with_semaphore(function) for function in async_funcs)
)

async def _tracer_initializer(self, session_name: str) -> LangChainTracer:
async def _tracer_initializer(
self, session_name: str
) -> Tuple[LangChainTracer, dict]:
"""
Initialize a tracer to share across tasks.

@@ -474,9 +397,11 @@ class LangChainPlusClient(BaseSettings):
Returns:
A LangChainTracer instance with an active session.
"""
tracer = LangChainTracer(session_name=session_name)
tracer.ensure_session()
return tracer
job_state = {"num_processed": 0}
with tracing_v2_enabled(session_name=session_name) as session:
tracer = LangChainTracer()
tracer.session = session
return tracer, job_state

async def arun_on_dataset(
self,
@@ -588,7 +513,8 @@ class LangChainPlusClient(BaseSettings):
except Exception as e:
logger.warning(f"Chain failed for example {example.id}. Error: {e}")
outputs.append({"Error": str(e)})
langchain_tracer.example_id = previous_example_id
finally:
langchain_tracer.example_id = previous_example_id
return outputs

def run_on_dataset(
@@ -624,16 +550,18 @@ class LangChainPlusClient(BaseSettings):
dataset = self.read_dataset(dataset_name=dataset_name)
examples = list(self.list_examples(dataset_id=str(dataset.id)))
results: Dict[str, Any] = {}
tracer = LangChainTracer(session_name=session_name)
tracer.ensure_session()
for i, example in enumerate(examples):
result = self.run_llm_or_chain(
example,
tracer,
llm_or_chain_factory,
num_repetitions,
)
if verbose:
print(f"{i+1} processed", flush=True, end="\r")
results[str(example.id)] = result
with tracing_v2_enabled(session_name=session_name) as session:
tracer = LangChainTracer()
tracer.session = session

for i, example in enumerate(examples):
result = self.run_llm_or_chain(
example,
tracer,
llm_or_chain_factory,
num_repetitions,
)
if verbose:
print(f"{i+1} processed", flush=True, end="\r")
results[str(example.id)] = result
return results

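A runnable reduction of the concurrency change above: one shared tracer plus an `asyncio.Semaphore` now bounds parallelism, replacing the per-worker tracer queue on the other side of the diff.

```python
import asyncio

async def gather_with_concurrency(n, *coros):
    """Run coroutines concurrently, at most n at a time."""
    semaphore = asyncio.Semaphore(n)

    async def bounded(coro):
        async with semaphore:
            return await coro

    return await asyncio.gather(*(bounded(c) for c in coros))

async def _demo():
    return await gather_with_concurrency(
        2, *(asyncio.sleep(0, result=i) for i in range(5))
    )

print(asyncio.run(_demo()))  # [0, 1, 2, 3, 4]
```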
@@ -2,9 +2,9 @@ from datetime import datetime
from typing import Any, Dict, List, Optional
from uuid import UUID

from pydantic import BaseModel, Field, root_validator
from pydantic import BaseModel, Field

from langchain.callbacks.tracers.schemas import Run, RunTypeEnum
from langchain.callbacks.tracers.schemas import Run


class ExampleBase(BaseModel):
@@ -52,48 +52,3 @@ class Dataset(DatasetBase):
id: UUID
created_at: datetime
modified_at: Optional[datetime] = Field(default=None)


class ListRunsQueryParams(BaseModel):
"""Query params for GET /runs endpoint."""

class Config:
extra = "forbid"

id: Optional[List[UUID]]
"""Filter runs by id."""
parent_run: Optional[UUID]
"""Filter runs by parent run."""
run_type: Optional[RunTypeEnum]
"""Filter runs by type."""
session: Optional[UUID] = Field(default=None, alias="session_id")
"""Only return runs within a session."""
reference_example: Optional[UUID]
"""Only return runs that reference the specified dataset example."""
execution_order: Optional[int]
"""Filter runs by execution order."""
error: Optional[bool]
"""Whether to return only runs that errored."""
offset: Optional[int]
"""The offset of the first run to return."""
limit: Optional[int]
"""The maximum number of runs to return."""
start_time: Optional[datetime] = Field(
default=None,
alias="start_before",
description="Query Runs that started <= this time",
)
end_time: Optional[datetime] = Field(
default=None,
alias="end_after",
description="Query Runs that ended >= this time",
)

@root_validator
def validate_time_range(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Validate that start_time <= end_time."""
start_time = values.get("start_time")
end_time = values.get("end_time")
if start_time and end_time and start_time > end_time:
raise ValueError("start_time must be <= end_time")
return values

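The removed model's cross-field check, isolated into a self-contained sketch: a pydantic (v1) `root_validator` sees all fields at once, so it can compare `start_time` and `end_time` in one place.

```python
from datetime import datetime
from typing import Any, Dict, Optional

from pydantic import BaseModel, root_validator

class TimeRange(BaseModel):
    start_time: Optional[datetime] = None
    end_time: Optional[datetime] = None

    @root_validator
    def validate_time_range(cls, values: Dict[str, Any]) -> Dict[str, Any]:
        # both fields are available here, unlike per-field validators
        start, end = values.get("start_time"), values.get("end_time")
        if start and end and start > end:
            raise ValueError("start_time must be <= end_time")
        return values
```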
@@ -23,7 +23,6 @@ from langchain.document_loaders.dataframe import DataFrameLoader
from langchain.document_loaders.diffbot import DiffbotLoader
from langchain.document_loaders.directory import DirectoryLoader
from langchain.document_loaders.discord import DiscordChatLoader
from langchain.document_loaders.docugami import DocugamiLoader
from langchain.document_loaders.duckdb_loader import DuckDBLoader
from langchain.document_loaders.email import (
OutlookMessageLoader,
@@ -81,10 +80,7 @@ from langchain.document_loaders.slack_directory import SlackDirectoryLoader
from langchain.document_loaders.spreedly import SpreedlyLoader
from langchain.document_loaders.srt import SRTLoader
from langchain.document_loaders.stripe import StripeLoader
from langchain.document_loaders.telegram import (
TelegramChatApiLoader,
TelegramChatFileLoader,
)
from langchain.document_loaders.telegram import TelegramChatLoader
from langchain.document_loaders.text import TextLoader
from langchain.document_loaders.toml import TomlLoader
from langchain.document_loaders.twitter import TwitterTweetLoader
@@ -113,9 +109,6 @@ from langchain.document_loaders.youtube import (
# Legacy: only for backwards compat. Use PyPDFLoader instead
PagedPDFSplitter = PyPDFLoader

# For backwards compatability
TelegramChatLoader = TelegramChatFileLoader

__all__ = [
"AZLyricsLoader",
"AirbyteJSONLoader",
@@ -137,7 +130,6 @@ __all__ = [
"DiffbotLoader",
"DirectoryLoader",
"DiscordChatLoader",
"DocugamiLoader",
"Docx2txtLoader",
"DuckDBLoader",
"EverNoteLoader",
@@ -169,12 +161,12 @@ __all__ = [
"OutlookMessageLoader",
"PDFMinerLoader",
"PDFMinerPDFasHTMLLoader",
"PDFPlumberLoader",
"PagedPDFSplitter",
"PlaywrightURLLoader",
"PyMuPDFLoader",
"PyPDFDirectoryLoader",
"PyPDFLoader",
"PDFPlumberLoader",
"PyPDFium2Loader",
"PythonLoader",
"ReadTheDocsLoader",
@@ -186,10 +178,9 @@ __all__ = [
"SeleniumURLLoader",
"SitemapLoader",
"SlackDirectoryLoader",
"TelegramChatFileLoader",
"TelegramChatApiLoader",
"SpreedlyLoader",
"StripeLoader",
"TelegramChatLoader",
"TextLoader",
"TomlLoader",
"TwitterTweetLoader",
@@ -212,5 +203,4 @@ __all__ = [
"WhatsAppChatLoader",
"WikipediaLoader",
"YoutubeLoader",
"TelegramChatLoader",
]

@@ -60,11 +60,11 @@ class BiliBiliLoader(BaseLoader):
raw_sub_titles = json.loads(result.content)["body"]
raw_transcript = " ".join([c["content"] for c in raw_sub_titles])

raw_transcript_with_meta_info = (
f"Video Title: {video_info['title']},"
f"description: {video_info['desc']}\n\n"
f"Transcript: {raw_transcript}"
)
raw_transcript_with_meta_info = f"""
Video Title: {video_info['title']},
description: {video_info['desc']}\n
Transcript: {raw_transcript}
"""
return raw_transcript_with_meta_info, video_info
else:
raw_transcript = ""

@@ -1,6 +1,5 @@
"""Load Data from a Confluence Space"""
import logging
from io import BytesIO
from typing import Any, Callable, List, Optional, Union

from tenacity import (
@@ -371,10 +370,12 @@ class ConfluenceLoader(BaseLoader):

def process_attachment(self, page_id: str) -> List[str]:
try:
import requests # noqa: F401
from PIL import Image # noqa: F401
except ImportError:
raise ImportError(
"`Pillow` package not found, " "please run `pip install Pillow`"
"`pytesseract` or `pdf2image` or `Pillow` package not found, "
"please run `pip install pytesseract pdf2image Pillow`"
)

# depending on setup you may also need to set the correct path for
@@ -418,6 +419,9 @@ class ConfluenceLoader(BaseLoader):
"please run `pip install pytesseract pdf2image`"
)

import pytesseract # noqa: F811
from pdf2image import convert_from_bytes # noqa: F811

response = self.confluence.request(path=link, absolute=True)
text = ""

@@ -440,6 +444,8 @@ class ConfluenceLoader(BaseLoader):

def process_image(self, link: str) -> str:
try:
from io import BytesIO # noqa: F401

import pytesseract # noqa: F401
from PIL import Image # noqa: F401
except ImportError:
@@ -466,6 +472,8 @@ class ConfluenceLoader(BaseLoader):

def process_doc(self, link: str) -> str:
try:
from io import BytesIO # noqa: F401

import docx2txt # noqa: F401
except ImportError:
raise ImportError(
@@ -514,14 +522,17 @@ class ConfluenceLoader(BaseLoader):

def process_svg(self, link: str) -> str:
try:
from io import BytesIO # noqa: F401

import pytesseract # noqa: F401
from PIL import Image # noqa: F401
from reportlab.graphics import renderPM # noqa: F401
from reportlab.graphics.shapes import Drawing # noqa: F401
from svglib.svglib import svg2rlg # noqa: F401
except ImportError:
raise ImportError(
"`pytesseract`, `Pillow`, `reportlab` or `svglib` package not found, "
"please run `pip install pytesseract Pillow reportlab svglib`"
"`pytesseract`, `Pillow`, or `svglib` package not found, "
"please run `pip install pytesseract Pillow svglib`"
)

response = self.confluence.request(path=link, absolute=True)

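The optional-dependency guard these hunks keep adjusting, in its general form: import inside the method, and on failure raise an `ImportError` that names every missing package with the matching `pip install` hint. A minimal sketch, not the loader's exact code:

```python
def process_image_stub(data: bytes) -> str:
    """Sketch of the lazy-import pattern used by the Confluence loader."""
    try:
        import pytesseract  # noqa: F401
        from PIL import Image  # noqa: F401
    except ImportError:
        raise ImportError(
            "`pytesseract` or `Pillow` package not found, "
            "please run `pip install pytesseract Pillow`"
        )
    ...  # OCR logic would go here
```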
@@ -22,7 +22,7 @@ class DataFrameLoader(BaseLoader):
def load(self) -> List[Document]:
"""Load from the dataframe."""
result = []
# For very large dataframes, this needs to yield instead of building a list
# For very large dataframes, this needs to yeild instead of building a list
# but that would require chaging return type to a generator for BaseLoader
# and all its subclasses, which is a bigger refactor. Marking as future TODO.
# This change will allow us to extend this to Spark and Dask dataframes.

@@ -1,343 +0,0 @@
"""Loader that loads processed documents from Docugami."""

import io
import logging
import os
import re
from pathlib import Path
from typing import Any, Dict, List, Mapping, Optional, Sequence

import requests
from pydantic import BaseModel, root_validator

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader

TD_NAME = "{http://www.w3.org/1999/xhtml}td"
TABLE_NAME = "{http://www.w3.org/1999/xhtml}table"

XPATH_KEY = "xpath"
DOCUMENT_ID_KEY = "id"
DOCUMENT_NAME_KEY = "name"
STRUCTURE_KEY = "structure"
TAG_KEY = "tag"
PROJECTS_KEY = "projects"

DEFAULT_API_ENDPOINT = "https://api.docugami.com/v1preview1"

logger = logging.getLogger(__name__)


class DocugamiLoader(BaseLoader, BaseModel):
"""Loader that loads processed docs from Docugami.

To use, you should have the ``lxml`` python package installed.
"""

api: str = DEFAULT_API_ENDPOINT

access_token: Optional[str] = os.environ.get("DOCUGAMI_API_KEY")
docset_id: Optional[str]
document_ids: Optional[Sequence[str]]
file_paths: Optional[Sequence[Path]]
min_chunk_size: int = 32 # appended to the next chunk to avoid over-chunking

@root_validator
def validate_local_or_remote(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Validate that either local file paths are given, or remote API docset ID."""
if values.get("file_paths") and values.get("docset_id"):
raise ValueError("Cannot specify both file_paths and remote API docset_id")

if not values.get("file_paths") and not values.get("docset_id"):
raise ValueError("Must specify either file_paths or remote API docset_id")

if values.get("docset_id") and not values.get("access_token"):
raise ValueError("Must specify access token if using remote API docset_id")

return values

def _parse_dgml(
self, document: Mapping, content: bytes, doc_metadata: Optional[Mapping] = None
) -> List[Document]:
"""Parse a single DGML document into a list of Documents."""
try:
from lxml import etree
except ImportError:
raise ValueError(
"Could not import lxml python package. "
"Please install it with `pip install lxml`."
)

# helpers
def _xpath_qname_for_chunk(chunk: Any) -> str:
"""Get the xpath qname for a chunk."""
qname = f"{chunk.prefix}:{chunk.tag.split('}')[-1]}"

parent = chunk.getparent()
if parent is not None:
doppelgangers = [x for x in parent if x.tag == chunk.tag]
if len(doppelgangers) > 1:
idx_of_self = doppelgangers.index(chunk)
qname = f"{qname}[{idx_of_self + 1}]"

return qname

def _xpath_for_chunk(chunk: Any) -> str:
"""Get the xpath for a chunk."""
ancestor_chain = chunk.xpath("ancestor-or-self::*")
return "/" + "/".join(_xpath_qname_for_chunk(x) for x in ancestor_chain)

def _structure_value(node: Any) -> str:
"""Get the structure value for a node."""
structure = (
"table"
if node.tag == TABLE_NAME
else node.attrib["structure"]
if "structure" in node.attrib
else None
)
return structure

def _is_structural(node: Any) -> bool:
"""Check if a node is structural."""
return _structure_value(node) is not None

def _is_heading(node: Any) -> bool:
"""Check if a node is a heading."""
structure = _structure_value(node)
return structure is not None and structure.lower().startswith("h")

def _get_text(node: Any) -> str:
"""Get the text of a node."""
return " ".join(node.itertext()).strip()

def _has_structural_descendant(node: Any) -> bool:
"""Check if a node has a structural descendant."""
for child in node:
if _is_structural(child) or _has_structural_descendant(child):
return True
return False

def _leaf_structural_nodes(node: Any) -> List:
"""Get the leaf structural nodes of a node."""
if _is_structural(node) and not _has_structural_descendant(node):
return [node]
else:
leaf_nodes = []
for child in node:
leaf_nodes.extend(_leaf_structural_nodes(child))
return leaf_nodes

def _create_doc(node: Any, text: str) -> Document:
"""Create a Document from a node and text."""
metadata = {
XPATH_KEY: _xpath_for_chunk(node),
DOCUMENT_ID_KEY: document["id"],
DOCUMENT_NAME_KEY: document["name"],
STRUCTURE_KEY: node.attrib.get("structure", ""),
TAG_KEY: re.sub(r"\{.*\}", "", node.tag),
}

if doc_metadata:
metadata.update(doc_metadata)

return Document(
page_content=text,
metadata=metadata,
)

# parse the tree and return chunks
tree = etree.parse(io.BytesIO(content))
root = tree.getroot()

chunks: List[Document] = []
prev_small_chunk_text = None
for node in _leaf_structural_nodes(root):
text = _get_text(node)
if prev_small_chunk_text:
text = prev_small_chunk_text + " " + text
prev_small_chunk_text = None

if _is_heading(node) or len(text) < self.min_chunk_size:
# Save headings or other small chunks to be appended to the next chunk
prev_small_chunk_text = text
else:
chunks.append(_create_doc(node, text))

if prev_small_chunk_text and len(chunks) > 0:
# small chunk at the end left over, just append to last chunk
chunks[-1].page_content += " " + prev_small_chunk_text

return chunks

def _document_details_for_docset_id(self, docset_id: str) -> List[Dict]:
"""Gets all document details for the given docset ID"""
url = f"{self.api}/docsets/{docset_id}/documents"
all_documents = []

while url:
response = requests.get(
url,
headers={"Authorization": f"Bearer {self.access_token}"},
)
if response.ok:
data = response.json()
all_documents.extend(data["documents"])
url = data.get("next", None)
else:
raise Exception(
f"Failed to download {url} (status: {response.status_code})"
)

return all_documents

def _project_details_for_docset_id(self, docset_id: str) -> List[Dict]:
"""Gets all project details for the given docset ID"""
url = f"{self.api}/projects?docset.id={docset_id}"
all_projects = []

while url:
response = requests.request(
"GET",
url,
headers={"Authorization": f"Bearer {self.access_token}"},
data={},
)
if response.ok:
data = response.json()
all_projects.extend(data["projects"])
url = data.get("next", None)
else:
raise Exception(
f"Failed to download {url} (status: {response.status_code})"
)

return all_projects

def _metadata_for_project(self, project: Dict) -> Dict:
"""Gets project metadata for all files"""
project_id = project.get("id")

url = f"{self.api}/projects/{project_id}/artifacts/latest"
all_artifacts = []

while url:
response = requests.request(
"GET",
url,
headers={"Authorization": f"Bearer {self.access_token}"},
data={},
)
if response.ok:
data = response.json()
all_artifacts.extend(data["artifacts"])
url = data.get("next", None)
else:
raise Exception(
f"Failed to download {url} (status: {response.status_code})"
)

per_file_metadata = {}
for artifact in all_artifacts:
artifact_name = artifact.get("name")
artifact_url = artifact.get("url")
artifact_doc = artifact.get("document")

if artifact_name == f"{project_id}.xml" and artifact_url and artifact_doc:
doc_id = artifact_doc["id"]
metadata: Dict = {}

# the evaluated XML for each document is named after the project
response = requests.request(
"GET",
f"{artifact_url}/content",
headers={"Authorization": f"Bearer {self.access_token}"},
data={},
)

if response.ok:
try:
from lxml import etree
except ImportError:
raise ValueError(
"Could not import lxml python package. "
"Please install it with `pip install lxml`."
)
artifact_tree = etree.parse(io.BytesIO(response.content))
artifact_root = artifact_tree.getroot()
ns = artifact_root.nsmap
entries = artifact_root.xpath("//wp:Entry", namespaces=ns)
for entry in entries:
heading = entry.xpath("./wp:Heading", namespaces=ns)[0].text
value = " ".join(
entry.xpath("./wp:Value", namespaces=ns)[0].itertext()
).strip()
metadata[heading] = value
per_file_metadata[doc_id] = metadata
else:
raise Exception(
f"Failed to download {artifact_url}/content "
+ "(status: {response.status_code})"
)

return per_file_metadata

def _load_chunks_for_document(
self, docset_id: str, document: Dict, doc_metadata: Optional[Dict] = None
) -> List[Document]:
"""Load chunks for a document."""
document_id = document["id"]
url = f"{self.api}/docsets/{docset_id}/documents/{document_id}/dgml"

response = requests.request(
"GET",
url,
headers={"Authorization": f"Bearer {self.access_token}"},
data={},
)

if response.ok:
return self._parse_dgml(document, response.content, doc_metadata)
else:
raise Exception(
f"Failed to download {url} (status: {response.status_code})"
)

def load(self) -> List[Document]:
"""Load documents."""
chunks: List[Document] = []

if self.access_token and self.docset_id:
# remote mode
_document_details = self._document_details_for_docset_id(self.docset_id)
if self.document_ids:
_document_details = [
d for d in _document_details if d["id"] in self.document_ids
]

_project_details = self._project_details_for_docset_id(self.docset_id)
combined_project_metadata = {}
if _project_details:
# if there are any projects for this docset, load project metadata
for project in _project_details:
metadata = self._metadata_for_project(project)
combined_project_metadata.update(metadata)

for doc in _document_details:
doc_metadata = combined_project_metadata.get(doc["id"])
chunks += self._load_chunks_for_document(
self.docset_id, doc, doc_metadata
)
elif self.file_paths:
# local mode (for integration testing, or pre-downloaded XML)
for path in self.file_paths:
with open(path, "rb") as file:
chunks += self._parse_dgml(
{
DOCUMENT_ID_KEY: path.name,
DOCUMENT_NAME_KEY: path.name,
},
file.read(),
)

return chunks
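A hedged usage sketch for the loader above, following its own docstring and validator: remote mode needs a `docset_id` plus an access token (or `DOCUGAMI_API_KEY` in the environment), while local mode takes pre-downloaded DGML file paths instead. The identifiers and file name here are hypothetical, and the import only works on the side of this compare that includes the loader.

```python
from pathlib import Path

from langchain.document_loaders import DocugamiLoader

# remote mode: docset_id and access_token are hypothetical placeholders
remote = DocugamiLoader(docset_id="example-docset-id", access_token="...")

# local mode: parse DGML files already on disk (hypothetical path)
local = DocugamiLoader(file_paths=[Path("report.dgml")])
docs = local.load()
```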
@@ -75,7 +75,6 @@ class GitLoader(BaseLoader):
continue

metadata = {
"source": rel_file_path,
"file_path": rel_file_path,
"file_name": item.name,
"file_type": file_type,

@@ -1,15 +1,8 @@
from langchain.document_loaders.parsers.pdf import (
PDFMinerParser,
PDFPlumberParser,
PyMuPDFParser,
PyPDFium2Parser,
PyPDFParser,
)

__all__ = [
"PyPDFParser",
"PDFMinerParser",
"PyMuPDFParser",
"PyPDFium2Parser",
"PDFPlumberParser",
]
__all__ = ["PyPDFParser", "PDFMinerParser", "PyMuPDFParser", "PyPDFium2Parser"]

@@ -93,56 +93,9 @@ class PyPDFium2Parser(BaseBlobParser):
"""Lazily parse the blob."""
import pypdfium2

# pypdfium2 is really finicky with respect to closing things,
# if done incorrectly creates seg faults.
with blob.as_bytes_io() as file_path:
pdf_reader = pypdfium2.PdfDocument(file_path, autoclose=True)
try:
for page_number, page in enumerate(pdf_reader):
text_page = page.get_textpage()
content = text_page.get_text_range()
text_page.close()
page.close()
metadata = {"source": blob.source, "page": page_number}
yield Document(page_content=content, metadata=metadata)
finally:
pdf_reader.close()


class PDFPlumberParser(BaseBlobParser):
"""Parse PDFs with PDFPlumber."""

def __init__(self, text_kwargs: Optional[Mapping[str, Any]] = None) -> None:
"""Initialize the parser.

Args:
text_kwargs: Keyword arguments to pass to ``pdfplumber.Page.extract_text()``
"""
self.text_kwargs = text_kwargs or {}

def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Lazily parse the blob."""
import pdfplumber

with blob.as_bytes_io() as file_path:
doc = pdfplumber.open(file_path) # open document

yield from [
Document(
page_content=page.extract_text(**self.text_kwargs),
metadata=dict(
{
"source": blob.source,
"file_path": blob.source,
"page": page.page_number,
"total_pages": len(doc.pages),
},
**{
k: doc.metadata[k]
for k in doc.metadata
if type(doc.metadata[k]) in [str, int]
},
),
)
for page in doc.pages
]
with blob.as_bytes_io() as f:
pdf_reader = pypdfium2.PdfDocument(f)
for page_number, page in enumerate(pdf_reader):
content = page.get_textpage().get_text_range()
metadata = {"source": blob.source, "page": page_number}
yield Document(page_content=content, metadata=metadata)

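A sketch of driving the parser above directly with a `Blob`, mirroring what `PDFPlumberLoader.load()` does on the side of this compare that keeps the parser. The file name is a placeholder.

```python
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import PDFPlumberParser

parser = PDFPlumberParser(text_kwargs={"x_tolerance": 1})
blob = Blob.from_path("example.pdf")  # hypothetical file
documents = parser.parse(blob)  # one Document per page, with page metadata
```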
@@ -17,7 +17,6 @@ from langchain.document_loaders.base import BaseLoader
|
||||
from langchain.document_loaders.blob_loaders import Blob
|
||||
from langchain.document_loaders.parsers.pdf import (
|
||||
PDFMinerParser,
|
||||
PDFPlumberParser,
|
||||
PyMuPDFParser,
|
||||
PyPDFium2Parser,
|
||||
PyPDFParser,
|
||||
@@ -366,26 +365,96 @@ class MathpixPDFLoader(BasePDFLoader):


class PDFPlumberLoader(BasePDFLoader):
    """Loader that uses pdfplumber to load PDF files."""
    """Loader that uses PDFPlumber to load PDF files."""

    def __init__(
        self, file_path: str, text_kwargs: Optional[Mapping[str, Any]] = None
    ) -> None:
        self,
        file_path: str,
        text_kwargs: Mapping[str, Any] = {"x_tolerance": 3, "y_tolerance": 3},
        word_kwargs: Mapping[str, Any] = {"x_tolerance": 3, "y_tolerance": 3},
        image_kwargs: Mapping[str, Any] = {"resolution": None},
    ):
        """Initialize with file path."""
        try:
            import pdfplumber  # noqa:F401
        except ImportError:
            raise ValueError(
                "pdfplumber package not found, please install it with "
                "PDFPlumber package not found, please install it with "
                "`pip install pdfplumber`"
            )

        super().__init__(file_path)
        self.text_kwargs = text_kwargs or {}
        self.text_kwargs = text_kwargs
        self.word_kwargs = word_kwargs
        self.image_kwargs = image_kwargs

    def load(self) -> List[Document]:
        """Load file."""
        import pdfplumber

        parser = PDFPlumberParser(text_kwargs=self.text_kwargs)
        blob = Blob.from_path(self.file_path)
        return parser.parse(blob)
        doc = pdfplumber.open(self.file_path)
        file_path = self.source

        return [
            Document(
                page_content=page.extract_text(**self.text_kwargs).encode("utf-8"),
                metadata=dict(
                    {
                        "source": file_path,
                        "file_path": file_path,
                        "page_number": page.page_number,
                        "total_pages": len(doc.pages),
                    },
                    **{
                        k: doc.metadata[k]
                        for k in doc.metadata
                        if type(doc.metadata[k]) in [str, int]
                    },
                ),
            )
            for page in doc.pages
        ]

    def annotate_and_load(self, save_path: str) -> List[Document]:
        """Annotate/save pdf file using pdfplumber's visual debugging and load file."""
        import pdfplumber

        path = Path(save_path)
        path.mkdir(exist_ok=True, parents=True)
        doc = pdfplumber.open(self.file_path)
        file_path = self.source

        # get annotated PIL.Images
        annotated_imgs = []
        for page in doc.pages:
            im = page.to_image(**self.image_kwargs)
            annotated_imgs.append(
                im.draw_rects(page.extract_words(**self.word_kwargs)).annotated
            )
        # save as renamed pdf
        file_name = Path(self.file_path).stem
        annotated_imgs[0].save(
            str(path / "{}_annotated.pdf".format(file_name)),
            save_all=True,
            append_images=annotated_imgs[1:],
        )

        return [
            Document(
                page_content=page.extract_text(**self.text_kwargs).encode("utf-8"),
                metadata=dict(
                    {
                        "source": file_path,
                        "file_path": file_path,
                        "page_number": page.page_number,
                        "total_pages": len(doc.pages),
                    },
                    **{
                        k: doc.metadata[k]
                        for k in doc.metadata
                        if type(doc.metadata[k]) in [str, int]
                    },
                ),
            )
            for page in doc.pages
        ]

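For the loader form of the same change, a short usage sketch (the import path and file name are assumptions); the parser-backed implementation simply wraps the file in a Blob and delegates to PDFPlumberParser:

from langchain.document_loaders import PDFPlumberLoader  # assumed re-export

loader = PDFPlumberLoader("example.pdf")  # hypothetical path
docs = loader.load()
# source, file_path, page, total_pages, plus any str/int PDF metadata
print(docs[0].metadata)
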
@@ -32,12 +32,11 @@ class SitemapLoader(WebBaseLoader):
        blocksize: Optional[int] = None,
        blocknum: int = 0,
        meta_function: Optional[Callable] = None,
        is_local: bool = False,
    ):
        """Initialize with webpage path and optional filter URLs.

        Args:
            web_path: url of the sitemap. can also be a local path
            web_path: url of the sitemap
            filter_urls: list of strings or regexes that will be applied to filter the
                urls that are parsed and loaded
            parsing_function: Function to parse bs4.Soup output
@@ -46,7 +45,6 @@ class SitemapLoader(WebBaseLoader):
            meta_function: Function to parse bs4.Soup output for metadata
                remember when setting this method to also copy metadata["loc"]
                to metadata["source"] if you are using this field
            is_local: whether the sitemap is a local file
        """

        if blocksize is not None and blocksize < 1:
@@ -69,7 +67,6 @@ class SitemapLoader(WebBaseLoader):
        self.meta_function = meta_function or _default_meta_function
        self.blocksize = blocksize
        self.blocknum = blocknum
        self.is_local = is_local

    def parse_sitemap(self, soup: Any) -> List[dict]:
        """Parse sitemap xml and load into a list of dicts."""
@@ -103,17 +100,7 @@ class SitemapLoader(WebBaseLoader):

    def load(self) -> List[Document]:
        """Load sitemap."""
        if self.is_local:
            try:
                import bs4
            except ImportError:
                raise ValueError(
                    "bs4 package not found, please install it with " "`pip install bs4`"
                )
            fp = open(self.web_path)
            soup = bs4.BeautifulSoup(fp, "xml")
        else:
            soup = self.scrape("xml")
        soup = self.scrape("xml")

        els = self.parse_sitemap(soup)

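A sketch of the two call styles this hunk toggles, assuming the pre-change signature with `is_local` (URLs, paths, and the import location are illustrative):

from langchain.document_loaders.sitemap import SitemapLoader  # assumed module path

# Remote sitemap, filtered to a subtree of the site.
remote = SitemapLoader(
    "https://example.com/sitemap.xml",
    filter_urls=["https://example.com/docs/"],
)
docs = remote.load()

# Local sitemap file; only works while is_local exists on the loader.
local = SitemapLoader("sitemap.xml", is_local=True)
docs = local.load()
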
@@ -1,18 +1,10 @@
"""Loader that loads Telegram chat json dump."""
from __future__ import annotations

import asyncio
import json
from pathlib import Path
from typing import TYPE_CHECKING, Dict, List, Optional, Union
from typing import List

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

if TYPE_CHECKING:
    import pandas as pd
    from telethon.hints import EntityLike


def concatenate_rows(row: dict) -> str:
@@ -23,7 +15,7 @@ def concatenate_rows(row: dict) -> str:
    return f"{sender} on {date}: {text}\n\n"


class TelegramChatFileLoader(BaseLoader):
class TelegramChatLoader(BaseLoader):
    """Loader that loads Telegram chat json directory dump."""

    def __init__(self, path: str):
@@ -45,209 +37,3 @@ class TelegramChatFileLoader(BaseLoader):
        metadata = {"source": str(p)}

        return [Document(page_content=text, metadata=metadata)]


def text_to_docs(text: Union[str, List[str]]) -> List[Document]:
    """Converts a string or list of strings to a list of Documents with metadata."""
    if isinstance(text, str):
        # Take a single string as one page
        text = [text]
    page_docs = [Document(page_content=page) for page in text]

    # Add page numbers as metadata
    for i, doc in enumerate(page_docs):
        doc.metadata["page"] = i + 1

    # Split pages into chunks
    doc_chunks = []

    for doc in page_docs:
        text_splitter = RecursiveCharacterTextSplitter(
            chunk_size=800,
            separators=["\n\n", "\n", ".", "!", "?", ",", " ", ""],
            chunk_overlap=20,
        )
        chunks = text_splitter.split_text(doc.page_content)
        for i, chunk in enumerate(chunks):
            doc = Document(
                page_content=chunk, metadata={"page": doc.metadata["page"], "chunk": i}
            )
            # Add sources as metadata
            doc.metadata["source"] = f"{doc.metadata['page']}-{doc.metadata['chunk']}"
            doc_chunks.append(doc)
    return doc_chunks


class TelegramChatApiLoader(BaseLoader):
    """Loader that loads Telegram chat json directory dump."""

    def __init__(
        self,
        chat_entity: Optional[EntityLike] = None,
        api_id: Optional[int] = None,
        api_hash: Optional[str] = None,
        username: Optional[str] = None,
        file_path: str = "telegram_data.json",
    ):
        """Initialize with API parameters."""
        self.chat_entity = chat_entity
        self.api_id = api_id
        self.api_hash = api_hash
        self.username = username
        self.file_path = file_path

    async def fetch_data_from_telegram(self) -> None:
        """Fetch data from Telegram API and save it as a JSON file."""
        from telethon.sync import TelegramClient

        data = []
        async with TelegramClient(self.username, self.api_id, self.api_hash) as client:
            async for message in client.iter_messages(self.chat_entity):
                is_reply = message.reply_to is not None
                reply_to_id = message.reply_to.reply_to_msg_id if is_reply else None
                data.append(
                    {
                        "sender_id": message.sender_id,
                        "text": message.text,
                        "date": message.date.isoformat(),
                        "message.id": message.id,
                        "is_reply": is_reply,
                        "reply_to_id": reply_to_id,
                    }
                )

        with open(self.file_path, "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False, indent=4)

    def _get_message_threads(self, data: pd.DataFrame) -> dict:
        """Create a dictionary of message threads from the given data.

        Args:
            data (pd.DataFrame): A DataFrame containing the conversation \
                data with columns:
                - message.sender_id
                - text
                - date
                - message.id
                - is_reply
                - reply_to_id

        Returns:
            dict: A dictionary where the key is the parent message ID and \
                the value is a list of message IDs in ascending order.
        """

        def find_replies(parent_id: int, reply_data: pd.DataFrame) -> List[int]:
            """
            Recursively find all replies to a given parent message ID.

            Args:
                parent_id (int): The parent message ID.
                reply_data (pd.DataFrame): A DataFrame containing reply messages.

            Returns:
                list: A list of message IDs that are replies to the parent message ID.
            """
            # Find direct replies to the parent message ID
            direct_replies = reply_data[reply_data["reply_to_id"] == parent_id][
                "message.id"
            ].tolist()

            # Recursively find replies to the direct replies
            all_replies = []
            for reply_id in direct_replies:
                all_replies += [reply_id] + find_replies(reply_id, reply_data)

            return all_replies

        # Filter out parent messages
        parent_messages = data[~data["is_reply"]]

        # Filter out reply messages and drop rows with NaN in 'reply_to_id'
        reply_messages = data[data["is_reply"]].dropna(subset=["reply_to_id"])

        # Convert 'reply_to_id' to integer
        reply_messages["reply_to_id"] = reply_messages["reply_to_id"].astype(int)

        # Create a dictionary of message threads with parent message IDs as keys and \
        # lists of reply message IDs as values
        message_threads = {
            parent_id: [parent_id] + find_replies(parent_id, reply_messages)
            for parent_id in parent_messages["message.id"]
        }

        return message_threads

    def _combine_message_texts(
        self, message_threads: Dict[int, List[int]], data: pd.DataFrame
    ) -> str:
        """
        Combine the message texts for each parent message ID based \
            on the list of message threads.

        Args:
            message_threads (dict): A dictionary where the key is the parent message \
                ID and the value is a list of message IDs in ascending order.
            data (pd.DataFrame): A DataFrame containing the conversation data:
                - message.sender_id
                - text
                - date
                - message.id
                - is_reply
                - reply_to_id

        Returns:
            str: A combined string of message texts sorted by date.
        """
        combined_text = ""

        # Iterate through sorted parent message IDs
        for parent_id, message_ids in message_threads.items():
            # Get the message texts for the message IDs and sort them by date
            message_texts = (
                data[data["message.id"].isin(message_ids)]
                .sort_values(by="date")["text"]
                .tolist()
            )
            message_texts = [str(elem) for elem in message_texts]

            # Combine the message texts
            combined_text += " ".join(message_texts) + ".\n"

        return combined_text.strip()

    def load(self) -> List[Document]:
        """Load documents."""

        if self.chat_entity is not None:
            try:
                import nest_asyncio

                nest_asyncio.apply()
                asyncio.run(self.fetch_data_from_telegram())
            except ImportError:
                raise ValueError(
                    """`nest_asyncio` package not found.
                    please install with `pip install nest_asyncio`
                    """
                )

        p = Path(self.file_path)

        with open(p, encoding="utf8") as f:
            d = json.load(f)
        try:
            import pandas as pd
        except ImportError:
            raise ValueError(
                """`pandas` package not found.
                please install with `pip install pandas`
                """
            )
        normalized_messages = pd.json_normalize(d)
        df = pd.DataFrame(normalized_messages)

        message_threads = self._get_message_threads(df)
        combined_texts = self._combine_message_texts(message_threads, df)

        return text_to_docs(combined_texts)

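A sketch of pulling a chat through the API-backed loader above; credentials, chat name, and the import path are placeholders:

from langchain.document_loaders import TelegramChatApiLoader  # assumed re-export

loader = TelegramChatApiLoader(
    chat_entity="<channel or chat>",  # placeholder EntityLike
    api_id=12345,                     # placeholder credentials
    api_hash="<api hash>",
    username="session_name",
    file_path="telegram_data.json",
)
# Fetches via telethon, threads replies by reply_to_id,
# then splits the combined text into ~800-char chunks.
docs = loader.load()
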
@@ -68,19 +68,17 @@ class WebBaseLoader(BaseLoader):
                "bs4 package not found, please install it with " "`pip install bs4`"
            )

        headers = header_template or default_header_template
        if not headers.get("User-Agent"):
            try:
                from fake_useragent import UserAgent
        try:
            from fake_useragent import UserAgent

                headers["User-Agent"] = UserAgent().random
            except ImportError:
                logger.info(
                    "fake_useragent not found, using default user agent."
                    "To get a realistic header for requests, "
                    "`pip install fake_useragent`."
                )
        self.session.headers = dict(headers)
            headers = header_template or default_header_template
            headers["User-Agent"] = UserAgent().random
            self.session.headers = dict(headers)
        except ImportError:
            logger.info(
                "fake_useragent not found, using default user agent. "
                "To get a realistic header for requests, `pip install fake_useragent`."
            )

    @property
    def web_path(self) -> str:

@@ -4,7 +4,6 @@ from __future__ import annotations

import logging
from pathlib import Path
from typing import Any, Dict, List, Optional
from urllib.parse import parse_qs, urlparse

from pydantic import root_validator
from pydantic.dataclasses import dataclass
@@ -98,47 +97,6 @@ class GoogleApiClient:
        return creds


ALLOWED_SCHEMAS = {"http", "https"}
ALLOWED_NETLOCK = {
    "youtu.be",
    "m.youtube.com",
    "youtube.com",
    "www.youtube.com",
    "www.youtube-nocookie.com",
    "vid.plus",
}


def _parse_video_id(url: str) -> Optional[str]:
    """Parse a youtube url and return the video id if valid, otherwise None."""
    parsed_url = urlparse(url)

    if parsed_url.scheme not in ALLOWED_SCHEMAS:
        return None

    if parsed_url.netloc not in ALLOWED_NETLOCK:
        return None

    path = parsed_url.path

    if path.endswith("/watch"):
        query = parsed_url.query
        parsed_query = parse_qs(query)
        if "v" in parsed_query:
            ids = parsed_query["v"]
            video_id = ids if isinstance(ids, str) else ids[0]
        else:
            return None
    else:
        path = parsed_url.path.lstrip("/")
        video_id = path.split("/")[-1]

    if len(video_id) != 11:  # Video IDs are 11 characters long
        return None

    return video_id


class YoutubeLoader(BaseLoader):
    """Loader that loads Youtube transcripts."""

@@ -155,20 +113,10 @@ class YoutubeLoader(BaseLoader):
        self.language = language
        self.continue_on_failure = continue_on_failure

    @staticmethod
    def extract_video_id(youtube_url: str) -> str:
        """Extract video id from common YT urls."""
        video_id = _parse_video_id(youtube_url)
        if not video_id:
            raise ValueError(
                f"Could not determine the video ID for the URL {youtube_url}"
            )
        return video_id

    @classmethod
    def from_youtube_url(cls, youtube_url: str, **kwargs: Any) -> YoutubeLoader:
        """Given youtube URL, load video."""
        video_id = cls.extract_video_id(youtube_url)
        video_id = youtube_url.split("youtube.com/watch?v=")[-1]
        return cls(video_id, **kwargs)

    def load(self) -> List[Document]:

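The URL validation above is stricter than the plain `split()` it replaces; a quick sketch of what `_parse_video_id` accepts and rejects (URLs are illustrative):

# _parse_video_id handles the common URL shapes and rejects the rest.
assert _parse_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ") == "dQw4w9WgXcQ"
assert _parse_video_id("https://youtu.be/dQw4w9WgXcQ") == "dQw4w9WgXcQ"
assert _parse_video_id("ftp://youtube.com/watch?v=dQw4w9WgXcQ") is None   # bad scheme
assert _parse_video_id("https://example.com/watch?v=dQw4w9WgXcQ") is None  # host not allowed
assert _parse_video_id("https://youtu.be/short") is None                   # not 11 characters
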
@@ -18,13 +18,11 @@ class CohereEmbeddings(BaseModel, Embeddings):
    .. code-block:: python

        from langchain.embeddings import CohereEmbeddings
        cohere = CohereEmbeddings(
            model="embed-english-light-v2.0", cohere_api_key="my-api-key"
        )
        cohere = CohereEmbeddings(model="medium", cohere_api_key="my-api-key")
    """

    client: Any  #: :meta private:
    model: str = "embed-english-v2.0"
    model: str = "large"
    """Model name to use."""

    truncate: Optional[str] = None

@@ -1,64 +1,16 @@
"""Wrapper around Google's PaLM Embeddings APIs."""
from __future__ import annotations

import logging
from typing import Any, Callable, Dict, List, Optional
from typing import Any, Dict, List, Optional

from pydantic import BaseModel, root_validator
from tenacity import (
    before_sleep_log,
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)

from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env

logger = logging.getLogger(__name__)


def _create_retry_decorator() -> Callable[[Any], Any]:
    """Returns a tenacity retry decorator, preconfigured to handle PaLM exceptions"""
    import google.api_core.exceptions

    multiplier = 2
    min_seconds = 1
    max_seconds = 60
    max_retries = 10

    return retry(
        reraise=True,
        stop=stop_after_attempt(max_retries),
        wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),
        retry=(
            retry_if_exception_type(google.api_core.exceptions.ResourceExhausted)
            | retry_if_exception_type(google.api_core.exceptions.ServiceUnavailable)
            | retry_if_exception_type(google.api_core.exceptions.GoogleAPIError)
        ),
        before_sleep=before_sleep_log(logger, logging.WARNING),
    )


def embed_with_retry(
    embeddings: GooglePalmEmbeddings, *args: Any, **kwargs: Any
) -> Any:
    """Use tenacity to retry the completion call."""
    retry_decorator = _create_retry_decorator()

    @retry_decorator
    def _embed_with_retry(*args: Any, **kwargs: Any) -> Any:
        return embeddings.client.generate_embeddings(*args, **kwargs)

    return _embed_with_retry(*args, **kwargs)


class GooglePalmEmbeddings(BaseModel, Embeddings):
    client: Any
    google_api_key: Optional[str]
    model_name: str = "models/embedding-gecko-001"
    """Model name to use."""

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
@@ -82,5 +34,5 @@ class GooglePalmEmbeddings(BaseModel, Embeddings):

    def embed_query(self, text: str) -> List[float]:
        """Embed query text."""
        embedding = embed_with_retry(self, self.model_name, text)
        embedding = self.client.generate_embeddings(self.model_name, text)
        return embedding["embedding"]

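A sketch of the embeddings wrapper in use (API key and import path are assumptions); with the retry branch of this diff, every call to the PaLM client goes through the exponential-backoff decorator defined above:

from langchain.embeddings import GooglePalmEmbeddings  # assumed re-export

emb = GooglePalmEmbeddings(google_api_key="<key>")  # placeholder key
# Retried automatically on ResourceExhausted / ServiceUnavailable / GoogleAPIError.
vector = emb.embed_query("hello world")
print(len(vector))
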
@@ -53,9 +53,6 @@ class LlamaCppEmbeddings(BaseModel, Embeddings):
    """Number of tokens to process in parallel.
    Should be a number between 1 and n_ctx."""

    n_gpu_layers: Optional[int] = Field(None, alias="n_gpu_layers")
    """Number of layers to be loaded into gpu memory. Default None."""

    class Config:
        """Configuration for this pydantic object."""

@@ -65,37 +62,40 @@ class LlamaCppEmbeddings(BaseModel, Embeddings):
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that llama-cpp-python library is installed."""
        model_path = values["model_path"]
        model_param_names = [
            "n_ctx",
            "n_parts",
            "seed",
            "f16_kv",
            "logits_all",
            "vocab_only",
            "use_mlock",
            "n_threads",
            "n_batch",
        ]
        model_params = {k: values[k] for k in model_param_names}
        # For backwards compatibility, only include if non-null.
        if values["n_gpu_layers"] is not None:
            model_params["n_gpu_layers"] = values["n_gpu_layers"]
        n_ctx = values["n_ctx"]
        n_parts = values["n_parts"]
        seed = values["seed"]
        f16_kv = values["f16_kv"]
        logits_all = values["logits_all"]
        vocab_only = values["vocab_only"]
        use_mlock = values["use_mlock"]
        n_threads = values["n_threads"]
        n_batch = values["n_batch"]

        try:
            from llama_cpp import Llama

            values["client"] = Llama(model_path, embedding=True, **model_params)
            values["client"] = Llama(
                model_path=model_path,
                n_ctx=n_ctx,
                n_parts=n_parts,
                seed=seed,
                f16_kv=f16_kv,
                logits_all=logits_all,
                vocab_only=vocab_only,
                use_mlock=use_mlock,
                n_threads=n_threads,
                n_batch=n_batch,
                embedding=True,
            )
        except ImportError:
            raise ModuleNotFoundError(
                "Could not import llama-cpp-python library. "
                "Please install the llama-cpp-python library to "
                "use this embedding model: pip install llama-cpp-python"
            )
        except Exception as e:
            raise ValueError(
                f"Could not load Llama model from path: {model_path}. "
                f"Received error {e}"
            )
        except Exception:
            raise NameError(f"Could not load Llama model from path: {model_path}")

        return values

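A sketch showing why the `model_params` refactor matters: `n_gpu_layers` is only forwarded when set, so older llama-cpp-python builds that lack the argument keep working (model path and import path are illustrative):

from langchain.embeddings import LlamaCppEmbeddings  # assumed re-export

# Omitting n_gpu_layers keeps the Llama(...) kwargs identical to the old behaviour.
cpu_only = LlamaCppEmbeddings(model_path="./models/ggml-model.bin")  # placeholder path

# Setting it adds n_gpu_layers to the Llama(...) call for GPU offload.
gpu = LlamaCppEmbeddings(model_path="./models/ggml-model.bin", n_gpu_layers=40)
print(len(gpu.embed_query("hello")))
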
@@ -9,7 +9,6 @@ from typing import (
    List,
    Literal,
    Optional,
    Sequence,
    Set,
    Tuple,
    Union,
@@ -116,7 +115,7 @@ class OpenAIEmbeddings(BaseModel, Embeddings):
    openai_api_key: Optional[str] = None
    openai_organization: Optional[str] = None
    allowed_special: Union[Literal["all"], Set[str]] = set()
    disallowed_special: Union[Literal["all"], Set[str], Sequence[str]] = "all"
    disallowed_special: Union[Literal["all"], Set[str], Tuple[()]] = "all"
    chunk_size: int = 1000
    """Maximum number of texts to embed in each batch"""
    max_retries: int = 6

@@ -1,16 +0,0 @@
import platform
from functools import lru_cache


@lru_cache(maxsize=1)
def get_runtime_environment() -> dict:
    """Get information about the environment."""
    # Lazy import to avoid circular imports
    from langchain import __version__

    return {
        "library_version": __version__,
        "platform": platform.platform(),
        "runtime": "python",
        "runtime_version": platform.python_version(),
    }
File diff suppressed because it is too large
@@ -123,7 +123,7 @@ class GenerativeAgentMemory(BaseMemory):
        logger.info(f"Importance score: {score}")
        match = re.search(r"^\D*(\d+)", score)
        if match:
            return (float(match.group(1)) / 10) * self.importance_weight
        return (float(score[0]) / 10) * self.importance_weight
        else:
            return 0.0

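The regex change makes score parsing tolerant of chatty model replies; a quick sketch of the difference (the sample strings are illustrative):

import re

for score in ["8", "Rating: 7", "It is a 10 out of 10"]:
    match = re.search(r"^\D*(\d+)", score)
    print(match.group(1) if match else None)  # prints 8, 7, 10
# The old float(score[0]) would crash on anything not starting with a digit.
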
@@ -1,17 +1,9 @@
"""Wrapper around Google's PaLM Text APIs."""
from __future__ import annotations

import logging
from typing import Any, Callable, Dict, List, Optional
from typing import Any, Dict, List, Optional

from pydantic import BaseModel, root_validator
from tenacity import (
    before_sleep_log,
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)

from langchain.callbacks.manager import (
    AsyncCallbackManagerForLLMRun,
@@ -21,44 +13,6 @@ from langchain.llms import BaseLLM
from langchain.schema import Generation, LLMResult
from langchain.utils import get_from_dict_or_env

logger = logging.getLogger(__name__)


def _create_retry_decorator() -> Callable[[Any], Any]:
    """Returns a tenacity retry decorator, preconfigured to handle PaLM exceptions"""
    try:
        import google.api_core.exceptions
    except ImportError:
        raise ImportError()

    multiplier = 2
    min_seconds = 1
    max_seconds = 60
    max_retries = 10

    return retry(
        reraise=True,
        stop=stop_after_attempt(max_retries),
        wait=wait_exponential(multiplier=multiplier, min=min_seconds, max=max_seconds),
        retry=(
            retry_if_exception_type(google.api_core.exceptions.ResourceExhausted)
            | retry_if_exception_type(google.api_core.exceptions.ServiceUnavailable)
            | retry_if_exception_type(google.api_core.exceptions.GoogleAPIError)
        ),
        before_sleep=before_sleep_log(logger, logging.WARNING),
    )


def generate_with_retry(llm: GooglePalm, **kwargs: Any) -> Any:
    """Use tenacity to retry the completion call."""
    retry_decorator = _create_retry_decorator()

    @retry_decorator
    def _generate_with_retry(**kwargs: Any) -> Any:
        return llm.client.generate_text(**kwargs)

    return _generate_with_retry(**kwargs)


def _strip_erroneous_leading_spaces(text: str) -> str:
    """Strip erroneous leading spaces from text.
@@ -131,8 +85,7 @@ class GooglePalm(BaseLLM, BaseModel):
    ) -> LLMResult:
        generations = []
        for prompt in prompts:
            completion = generate_with_retry(
                self,
            completion = self.client.generate_text(
                model=self.model_name,
                prompt=prompt,
                stop_sequences=stop,

@@ -9,7 +9,7 @@ from langchain.llms.base import LLM
from langchain.llms.utils import enforce_stop_tokens
from langchain.utils import get_from_dict_or_env

VALID_TASKS = ("text2text-generation", "text-generation", "summarization")
VALID_TASKS = ("text2text-generation", "text-generation")


class HuggingFaceEndpoint(LLM):
@@ -37,8 +37,7 @@ class HuggingFaceEndpoint(LLM):
    endpoint_url: str = ""
    """Endpoint URL to use."""
    task: Optional[str] = None
    """Task to call the model with.
    Should be a task that returns `generated_text` or `summary_text`."""
    """Task to call the model with. Should be a task that returns `generated_text`."""
    model_kwargs: Optional[dict] = None
    """Key word arguments to pass to the model."""

@@ -139,8 +138,6 @@ class HuggingFaceEndpoint(LLM):
            text = generated_text[0]["generated_text"][len(prompt) :]
        elif self.task == "text2text-generation":
            text = generated_text[0]["generated_text"]
        elif self.task == "summarization":
            text = generated_text[0]["summary_text"]
        else:
            raise ValueError(
                f"Got invalid task {self.task}, "

@@ -9,7 +9,7 @@ from langchain.llms.utils import enforce_stop_tokens
from langchain.utils import get_from_dict_or_env

DEFAULT_REPO_ID = "gpt2"
VALID_TASKS = ("text2text-generation", "text-generation", "summarization")
VALID_TASKS = ("text2text-generation", "text-generation")


class HuggingFaceHub(LLM):
@@ -19,7 +19,7 @@ class HuggingFaceHub(LLM):
    environment variable ``HUGGINGFACEHUB_API_TOKEN`` set with your API token, or pass
    it as a named parameter to the constructor.

    Only supports `text-generation`, `text2text-generation` and `summarization` for now.
    Only supports `text-generation` and `text2text-generation` for now.

    Example:
        .. code-block:: python

@@ -32,8 +32,7 @@ class HuggingFaceHub(LLM):
    repo_id: str = DEFAULT_REPO_ID
    """Model name to use."""
    task: Optional[str] = None
    """Task to call the model with.
    Should be a task that returns `generated_text` or `summary_text`."""
    """Task to call the model with. Should be a task that returns `generated_text`."""
    model_kwargs: Optional[dict] = None
    """Key word arguments to pass to the model."""

@@ -115,8 +114,6 @@ class HuggingFaceHub(LLM):
        text = response[0]["generated_text"][len(prompt) :]
    elif self.client.task == "text2text-generation":
        text = response[0]["generated_text"]
    elif self.client.task == "summarization":
        text = response[0]["summary_text"]
    else:
        raise ValueError(
            f"Got invalid task {self.client.task}, "

@@ -11,7 +11,7 @@ from langchain.llms.utils import enforce_stop_tokens

DEFAULT_MODEL_ID = "gpt2"
DEFAULT_TASK = "text-generation"
VALID_TASKS = ("text2text-generation", "text-generation", "summarization")
VALID_TASKS = ("text2text-generation", "text-generation")

logger = logging.getLogger(__name__)

@@ -21,7 +21,7 @@ class HuggingFacePipeline(LLM):

    To use, you should have the ``transformers`` python package installed.

    Only supports `text-generation`, `text2text-generation` and `summarization` for now.
    Only supports `text-generation` and `text2text-generation` for now.

    Example using from_model_id:
        .. code-block:: python

@@ -86,7 +86,7 @@ class HuggingFacePipeline(LLM):
        try:
            if task == "text-generation":
                model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)
            elif task in ("text2text-generation", "summarization"):
            elif task == "text2text-generation":
                model = AutoModelForSeq2SeqLM.from_pretrained(model_id, **_model_kwargs)
            else:
                raise ValueError(
@@ -162,8 +162,6 @@ class HuggingFacePipeline(LLM):
            text = response[0]["generated_text"][len(prompt) :]
        elif self.pipeline.task == "text2text-generation":
            text = response[0]["generated_text"]
        elif self.pipeline.task == "summarization":
            text = response[0]["summary_text"]
        else:
            raise ValueError(
                f"Got invalid task {self.pipeline.task}, "

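A sketch of the summarization path these hunks toggle (model id is illustrative); when the `summarization` branch is present, the seq2seq model class is loaded and `summary_text` is read from the pipeline output:

from langchain.llms import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="facebook/bart-large-cnn",  # illustrative summarization model
    task="summarization",
)
# Routed through AutoModelForSeq2SeqLM, result taken from response[0]["summary_text"].
print(llm("LangChain is a framework for developing applications powered by LLMs. ..."))
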
@@ -1,5 +1,4 @@
"""Wrapper around Huggingface text generation inference API."""
from functools import partial
from typing import Any, Dict, List, Optional

from pydantic import Extra, Field, root_validator
@@ -37,7 +36,6 @@ class HuggingFaceTextGenInference(LLM):
    Example:
        .. code-block:: python

            # Basic Example (no streaming)
            llm = HuggingFaceTextGenInference(
                inference_server_url = "http://localhost:8010/",
                max_new_tokens = 512,
@@ -47,25 +45,6 @@ class HuggingFaceTextGenInference(LLM):
                temperature = 0.01,
                repetition_penalty = 1.03,
            )
            print(llm("What is Deep Learning?"))

            # Streaming response example
            from langchain.callbacks import streaming_stdout

            callbacks = [streaming_stdout.StreamingStdOutCallbackHandler()]
            llm = HuggingFaceTextGenInference(
                inference_server_url = "http://localhost:8010/",
                max_new_tokens = 512,
                top_k = 10,
                top_p = 0.95,
                typical_p = 0.95,
                temperature = 0.01,
                repetition_penalty = 1.03,
                callbacks = callbacks,
                stream = True
            )
            print(llm("What is Deep Learning?"))

    """

    max_new_tokens: int = 512
@@ -78,7 +57,6 @@ class HuggingFaceTextGenInference(LLM):
    seed: Optional[int] = None
    inference_server_url: str = ""
    timeout: int = 120
    stream: bool = False
    client: Any

    class Config:
@@ -119,52 +97,22 @@ class HuggingFaceTextGenInference(LLM):
        else:
            stop += self.stop_sequences

        if not self.stream:
            res = self.client.generate(
                prompt,
                stop_sequences=stop,
                max_new_tokens=self.max_new_tokens,
                top_k=self.top_k,
                top_p=self.top_p,
                typical_p=self.typical_p,
                temperature=self.temperature,
                repetition_penalty=self.repetition_penalty,
                seed=self.seed,
            )
            # remove stop sequences from the end of the generated text
            for stop_seq in stop:
                if stop_seq in res.generated_text:
                    res.generated_text = res.generated_text[
                        : res.generated_text.index(stop_seq)
                    ]
            text = res.generated_text
        else:
            text_callback = None
            if run_manager:
                text_callback = partial(
                    run_manager.on_llm_new_token, verbose=self.verbose
                )
            params = {
                "stop_sequences": stop,
                "max_new_tokens": self.max_new_tokens,
                "top_k": self.top_k,
                "top_p": self.top_p,
                "typical_p": self.typical_p,
                "temperature": self.temperature,
                "repetition_penalty": self.repetition_penalty,
                "seed": self.seed,
            }
            text = ""
            for res in self.client.generate_stream(prompt, **params):
                token = res.token
                is_stop = False
                for stop_seq in stop:
                    if stop_seq in token.text:
                        is_stop = True
                        break
                if is_stop:
                    break
                if not token.special:
                    if text_callback:
                        text_callback(token.text)
            return text
        res = self.client.generate(
            prompt,
            stop_sequences=stop,
            max_new_tokens=self.max_new_tokens,
            top_k=self.top_k,
            top_p=self.top_p,
            typical_p=self.typical_p,
            temperature=self.temperature,
            repetition_penalty=self.repetition_penalty,
            seed=self.seed,
        )
        # remove stop sequences from the end of the generated text
        for stop_seq in stop:
            if stop_seq in res.generated_text:
                res.generated_text = res.generated_text[
                    : res.generated_text.index(stop_seq)
                ]

        return res.generated_text

@@ -64,9 +64,6 @@ class LlamaCpp(LLM):
    """Number of tokens to process in parallel.
    Should be a number between 1 and n_ctx."""

    n_gpu_layers: Optional[int] = Field(None, alias="n_gpu_layers")
    """Number of layers to be loaded into gpu memory. Default None."""

    suffix: Optional[str] = Field(None)
    """A suffix to append to the generated text. If None, no suffix is appended."""

@@ -107,41 +104,47 @@ class LlamaCpp(LLM):
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that llama-cpp-python library is installed."""
        model_path = values["model_path"]
        model_param_names = [
            "lora_path",
            "lora_base",
            "n_ctx",
            "n_parts",
            "seed",
            "f16_kv",
            "logits_all",
            "vocab_only",
            "use_mlock",
            "n_threads",
            "n_batch",
            "use_mmap",
            "last_n_tokens_size",
        ]
        model_params = {k: values[k] for k in model_param_names}
        # For backwards compatibility, only include if non-null.
        if values["n_gpu_layers"] is not None:
            model_params["n_gpu_layers"] = values["n_gpu_layers"]
        lora_path = values["lora_path"]
        lora_base = values["lora_base"]
        n_ctx = values["n_ctx"]
        n_parts = values["n_parts"]
        seed = values["seed"]
        f16_kv = values["f16_kv"]
        logits_all = values["logits_all"]
        vocab_only = values["vocab_only"]
        use_mlock = values["use_mlock"]
        n_threads = values["n_threads"]
        n_batch = values["n_batch"]
        use_mmap = values["use_mmap"]
        last_n_tokens_size = values["last_n_tokens_size"]

        try:
            from llama_cpp import Llama

            values["client"] = Llama(model_path, **model_params)
            values["client"] = Llama(
                model_path=model_path,
                lora_base=lora_base,
                lora_path=lora_path,
                n_ctx=n_ctx,
                n_parts=n_parts,
                seed=seed,
                f16_kv=f16_kv,
                logits_all=logits_all,
                vocab_only=vocab_only,
                use_mlock=use_mlock,
                n_threads=n_threads,
                n_batch=n_batch,
                use_mmap=use_mmap,
                last_n_tokens_size=last_n_tokens_size,
            )
        except ImportError:
            raise ModuleNotFoundError(
                "Could not import llama-cpp-python library. "
                "Please install the llama-cpp-python library to "
                "use this embedding model: pip install llama-cpp-python"
            )
        except Exception as e:
            raise ValueError(
                f"Could not load Llama model from path: {model_path}. "
                f"Received error {e}"
            )
        except Exception:
            raise NameError(f"Could not load Llama model from path: {model_path}")

        return values

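The same backwards-compatibility pattern as the embeddings class; a usage sketch (model path illustrative):

from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/ggml-model.bin",  # illustrative path
    n_ctx=2048,
    n_gpu_layers=32,  # only forwarded to Llama(...) because it is non-null
)
print(llm("Q: Name the planets in the solar system. A:"))
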
@@ -11,7 +11,7 @@ from langchain.llms.utils import enforce_stop_tokens

DEFAULT_MODEL_ID = "gpt2"
DEFAULT_TASK = "text-generation"
VALID_TASKS = ("text2text-generation", "text-generation", "summarization")
VALID_TASKS = ("text2text-generation", "text-generation")

logger = logging.getLogger(__name__)

@@ -35,8 +35,6 @@ def _generate_text(
        text = response[0]["generated_text"][len(prompt) :]
    elif pipeline.task == "text2text-generation":
        text = response[0]["generated_text"]
    elif pipeline.task == "summarization":
        text = response[0]["summary_text"]
    else:
        raise ValueError(
            f"Got invalid task {pipeline.task}, "
@@ -66,7 +64,7 @@ def _load_transformer(
    try:
        if task == "text-generation":
            model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)
        elif task in ("text2text-generation", "summarization"):
        elif task == "text2text-generation":
            model = AutoModelForSeq2SeqLM.from_pretrained(model_id, **_model_kwargs)
        else:
            raise ValueError(
@@ -121,7 +119,7 @@ class SelfHostedHuggingFaceLLM(SelfHostedPipeline):

    To use, you should have the ``runhouse`` python package installed.

    Only supports `text-generation`, `text2text-generation` and `summarization` for now.
    Only supports `text-generation` and `text2text-generation` for now.

    Example using from_model_id:
        .. code-block:: python

@@ -155,8 +153,7 @@ class SelfHostedHuggingFaceLLM(SelfHostedPipeline):
    model_id: str = DEFAULT_MODEL_ID
    """Hugging Face model_id to load the model."""
    task: str = DEFAULT_TASK
    """Hugging Face task ("text-generation", "text2text-generation" or
    "summarization")."""
    """Hugging Face task (either "text-generation" or "text2text-generation")."""
    device: int = 0
    """Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc."""
    model_kwargs: Optional[dict] = None

@@ -3,9 +3,6 @@ from langchain.memory.buffer import (
    ConversationStringBufferMemory,
)
from langchain.memory.buffer_window import ConversationBufferWindowMemory
from langchain.memory.chat_message_histories.cassandra import (
    CassandraChatMessageHistory,
)
from langchain.memory.chat_message_histories.cosmos_db import CosmosDBChatMessageHistory
from langchain.memory.chat_message_histories.dynamodb import DynamoDBChatMessageHistory
from langchain.memory.chat_message_histories.file import FileChatMessageHistory
@@ -49,5 +46,4 @@ __all__ = [
    "CosmosDBChatMessageHistory",
    "FileChatMessageHistory",
    "MongoDBChatMessageHistory",
    "CassandraChatMessageHistory",
]

@@ -1,6 +1,3 @@
from langchain.memory.chat_message_histories.cassandra import (
    CassandraChatMessageHistory,
)
from langchain.memory.chat_message_histories.cosmos_db import CosmosDBChatMessageHistory
from langchain.memory.chat_message_histories.dynamodb import DynamoDBChatMessageHistory
from langchain.memory.chat_message_histories.file import FileChatMessageHistory
@@ -21,5 +18,4 @@ __all__ = [
    "CosmosDBChatMessageHistory",
    "FirestoreChatMessageHistory",
    "MongoDBChatMessageHistory",
    "CassandraChatMessageHistory",
]

@@ -1,186 +0,0 @@
import json
import logging
from typing import List

from langchain.schema import (
    AIMessage,
    BaseChatMessageHistory,
    BaseMessage,
    HumanMessage,
    _message_to_dict,
    messages_from_dict,
)

logger = logging.getLogger(__name__)

DEFAULT_KEYSPACE_NAME = "chat_history"
DEFAULT_TABLE_NAME = "message_store"
DEFAULT_USERNAME = "cassandra"
DEFAULT_PASSWORD = "cassandra"
DEFAULT_PORT = 9042


class CassandraChatMessageHistory(BaseChatMessageHistory):
    """Chat message history that stores history in Cassandra.

    Args:
        contact_points: list of ips to connect to Cassandra cluster
        session_id: arbitrary key that is used to store the messages
            of a single chat session.
        port: port to connect to Cassandra cluster
        username: username to connect to Cassandra cluster
        password: password to connect to Cassandra cluster
        keyspace_name: name of the keyspace to use
        table_name: name of the table to use
    """

    def __init__(
        self,
        contact_points: List[str],
        session_id: str,
        port: int = DEFAULT_PORT,
        username: str = DEFAULT_USERNAME,
        password: str = DEFAULT_PASSWORD,
        keyspace_name: str = DEFAULT_KEYSPACE_NAME,
        table_name: str = DEFAULT_TABLE_NAME,
    ):
        self.contact_points = contact_points
        self.session_id = session_id
        self.port = port
        self.username = username
        self.password = password
        self.keyspace_name = keyspace_name
        self.table_name = table_name

        try:
            from cassandra import (
                AuthenticationFailed,
                OperationTimedOut,
                UnresolvableContactPoints,
            )
            from cassandra.cluster import Cluster, PlainTextAuthProvider
        except ImportError:
            raise ValueError(
                "Could not import cassandra-driver python package. "
                "Please install it with `pip install cassandra-driver`."
            )

        self.cluster: Cluster = Cluster(
            contact_points,
            port=port,
            auth_provider=PlainTextAuthProvider(
                username=self.username, password=self.password
            ),
        )

        try:
            self.session = self.cluster.connect()
        except (
            AuthenticationFailed,
            UnresolvableContactPoints,
            OperationTimedOut,
        ) as error:
            logger.error(
                "Unable to establish connection with \
                cassandra chat message history database"
            )
            raise error

        self._prepare_cassandra()

    def _prepare_cassandra(self) -> None:
        """Create the keyspace and table if they don't exist yet"""

        from cassandra import OperationTimedOut, Unavailable

        try:
            self.session.execute(
                f"""CREATE KEYSPACE IF NOT EXISTS
                {self.keyspace_name} WITH REPLICATION =
                {{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }};"""
            )
        except (OperationTimedOut, Unavailable) as error:
            logger.error(
                f"Unable to create cassandra \
                chat message history keyspace: {self.keyspace_name}."
            )
            raise error

        self.session.set_keyspace(self.keyspace_name)

        try:
            self.session.execute(
                f"""CREATE TABLE IF NOT EXISTS
                {self.table_name} (id UUID, session_id varchar,
                history text, PRIMARY KEY ((session_id), id) );"""
            )
        except (OperationTimedOut, Unavailable) as error:
            logger.error(
                f"Unable to create cassandra \
                chat message history table: {self.table_name}"
            )
            raise error

    @property
    def messages(self) -> List[BaseMessage]:  # type: ignore
        """Retrieve the messages from Cassandra"""
        from cassandra import ReadFailure, ReadTimeout, Unavailable

        try:
            rows = self.session.execute(
                f"""SELECT * FROM {self.table_name}
                WHERE session_id = '{self.session_id}' ;"""
            )
        except (Unavailable, ReadTimeout, ReadFailure) as error:
            logger.error("Unable to retrieve chat history messages from cassandra")
            raise error

        if rows:
            items = [json.loads(row.history) for row in rows]
        else:
            items = []

        messages = messages_from_dict(items)

        return messages

    def add_user_message(self, message: str) -> None:
        self.append(HumanMessage(content=message))

    def add_ai_message(self, message: str) -> None:
        self.append(AIMessage(content=message))

    def append(self, message: BaseMessage) -> None:
        """Append the message to the record in Cassandra"""

        import uuid

        from cassandra import Unavailable, WriteFailure, WriteTimeout

        try:
            self.session.execute(
                """INSERT INTO message_store
                (id, session_id, history) VALUES (%s, %s, %s);""",
                (uuid.uuid4(), self.session_id, json.dumps(_message_to_dict(message))),
            )
        except (Unavailable, WriteTimeout, WriteFailure) as error:
            logger.error("Unable to write chat history messages to cassandra")
            raise error

    def clear(self) -> None:
        """Clear session memory from Cassandra"""

        from cassandra import OperationTimedOut, Unavailable

        try:
            self.session.execute(
                f"DELETE FROM {self.table_name} WHERE session_id = '{self.session_id}';"
            )
        except (Unavailable, OperationTimedOut) as error:
            logger.error("Unable to clear chat history messages from cassandra")
            raise error

    def __del__(self) -> None:
        if self.session:
            self.session.shutdown()
        if self.cluster:
            self.cluster.shutdown()
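A sketch of the history class above in use, assuming a reachable Cassandra cluster (the address and session id are placeholders):

from langchain.memory import CassandraChatMessageHistory

history = CassandraChatMessageHistory(
    contact_points=["127.0.0.1"],  # placeholder cluster address
    session_id="user-42",
)
history.add_user_message("hi")
history.add_ai_message("hello!")
print(history.messages)  # rebuilt from the stored JSON rows via messages_from_dict
history.clear()
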
@@ -74,16 +74,6 @@ class BaseStringMessagePromptTemplate(BaseMessagePromptTemplate, ABC):
        prompt = PromptTemplate.from_template(template)
        return cls(prompt=prompt, **kwargs)

    @classmethod
    def from_template_file(
        cls: Type[MessagePromptTemplateT],
        template_file: Union[str, Path],
        input_variables: List[str],
        **kwargs: Any,
    ) -> MessagePromptTemplateT:
        prompt = PromptTemplate.from_file(template_file, input_variables)
        return cls(prompt=prompt, **kwargs)

    @abstractmethod
    def format(self, **kwargs: Any) -> BaseMessage:
        """To a BaseMessage."""

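A sketch of the classmethod above, assuming a concrete subclass such as HumanMessagePromptTemplate (file name and contents are illustrative); it reads a template from disk instead of an inline string:

from langchain.prompts.chat import HumanMessagePromptTemplate

# summary.txt might contain: "Summarize the following text: {text}"
msg_template = HumanMessagePromptTemplate.from_template_file(
    "summary.txt", input_variables=["text"]
)
message = msg_template.format(text="LangChain ships many document loaders.")
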
@@ -1,43 +0,0 @@
"""Milvus Retriever"""
from typing import Any, Dict, List, Optional

from langchain.embeddings.base import Embeddings
from langchain.schema import BaseRetriever, Document
from langchain.vectorstores.milvus import Milvus

# TODO: Update to MilvusClient + Hybrid Search when available


class MilvusRetreiver(BaseRetriever):
    def __init__(
        self,
        embedding_function: Embeddings,
        collection_name: str = "LangChainCollection",
        connection_args: Optional[Dict[str, Any]] = None,
        consistency_level: str = "Session",
        search_params: Optional[dict] = None,
    ):
        self.store = Milvus(
            embedding_function,
            collection_name,
            connection_args,
            consistency_level,
        )
        self.retriever = self.store.as_retriever(search_kwargs={"param": search_params})

    def add_texts(
        self, texts: List[str], metadatas: Optional[List[dict]] = None
    ) -> None:
        """Add text to the Milvus store

        Args:
            texts (List[str]): The text
            metadatas (List[dict]): Metadata dicts, must line up with existing store
        """
        self.store.add_texts(texts, metadatas)

    def get_relevant_documents(self, query: str) -> List[Document]:
        return self.retriever.get_relevant_documents(query)

    async def aget_relevant_documents(self, query: str) -> List[Document]:
        raise NotImplementedError
@@ -68,7 +68,7 @@ class SelfQueryRetriever(BaseRetriever, BaseModel):
    Returns:
        List of relevant documents
    """
    inputs = self.llm_chain.prep_inputs({"query": query})
    inputs = self.llm_chain.prep_inputs(query)
    structured_query = cast(
        StructuredQuery, self.llm_chain.predict_and_parse(callbacks=None, **inputs)
    )
@@ -77,11 +77,8 @@ class SelfQueryRetriever(BaseRetriever, BaseModel):
    new_query, new_kwargs = self.structured_query_translator.visit_structured_query(
        structured_query
    )
    if structured_query.limit is not None:
        new_kwargs["k"] = structured_query.limit

    search_kwargs = {**self.search_kwargs, **new_kwargs}
    docs = self.vectorstore.search(new_query, self.search_type, **search_kwargs)
    docs = self.vectorstore.search(query, self.search_type, **search_kwargs)
    return docs

async def aget_relevant_documents(self, query: str) -> List[Document]:
@@ -96,13 +93,11 @@ class SelfQueryRetriever(BaseRetriever, BaseModel):
    metadata_field_info: List[AttributeInfo],
    structured_query_translator: Optional[Visitor] = None,
    chain_kwargs: Optional[Dict] = None,
    enable_limit: bool = False,
    **kwargs: Any,
) -> "SelfQueryRetriever":
    if structured_query_translator is None:
        structured_query_translator = _get_builtin_translator(vectorstore.__class__)
    chain_kwargs = chain_kwargs or {}

    if "allowed_comparators" not in chain_kwargs:
        chain_kwargs[
            "allowed_comparators"
@@ -112,11 +107,7 @@ class SelfQueryRetriever(BaseRetriever, BaseModel):
        "allowed_operators"
    ] = structured_query_translator.allowed_operators
    llm_chain = load_query_constructor_chain(
        llm,
        document_contents,
        metadata_field_info,
        enable_limit=enable_limit,
        **chain_kwargs,
        llm, document_contents, metadata_field_info, **chain_kwargs
    )
    return cls(
        llm_chain=llm_chain,

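A sketch of the `enable_limit` flag wired through `from_llm` above; the LLM, vector store, and field info are placeholders, and the leading positional-argument order is an assumption based on the `load_query_constructor_chain(llm, document_contents, metadata_field_info, ...)` call in this hunk:

from langchain.retrievers.self_query.base import SelfQueryRetriever

retriever = SelfQueryRetriever.from_llm(
    llm,                 # placeholder: any LLM instance
    vectorstore,         # placeholder: any supported vector store
    document_contents="Brief summaries of movies",
    metadata_field_info=metadata_field_info,  # placeholder AttributeInfo list
    enable_limit=True,   # lets "two movies about dinosaurs" set k=2 via structured_query.limit
)
docs = retriever.get_relevant_documents("two movies about dinosaurs")
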
@@ -47,7 +47,7 @@ class WeaviateHybridSearchRetriever(BaseRetriever):
        arbitrary_types_allowed = True

    # added text_key
    def add_documents(self, docs: List[Document], **kwargs: Any) -> List[str]:
    def add_documents(self, docs: List[Document]) -> List[str]:
        """Upload documents to Weaviate."""
        from weaviate.util import get_valid_uuid

@@ -56,14 +56,7 @@ class WeaviateHybridSearchRetriever(BaseRetriever):
        for i, doc in enumerate(docs):
            metadata = doc.metadata or {}
            data_properties = {self._text_key: doc.page_content, **metadata}

            # If the UUID of one of the objects already exists
            # then the existing object will be replaced by the new object.
            if "uuids" in kwargs:
                _id = kwargs["uuids"][i]
            else:
                _id = get_valid_uuid(uuid4())

            _id = get_valid_uuid(uuid4())
            batch.add_data_object(data_properties, self._index_name, _id)
            ids.append(_id)
        return ids

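A sketch of the `uuids` passthrough this hunk toggles, given an existing WeaviateHybridSearchRetriever instance `retriever` (client setup elided; the id is a placeholder): pinning a stable UUID per document makes re-uploads replace rather than duplicate objects:

from langchain.schema import Document

docs = [Document(page_content="hello world", metadata={"topic": "greeting"})]

# With the kwargs variant, a fixed UUID per document makes the upload idempotent.
retriever.add_documents(docs, uuids=["9b4bb5c0-0000-0000-0000-000000000001"])  # placeholder id

# Without it, every call mints fresh UUIDs and re-adds the objects.
retriever.add_documents(docs)
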
Some files were not shown because too many files have changed in this diff