langchain: adds recursive json splitter (#17144)

- **Description:** This adds a recursive json splitter class to the
existing text_splitters, as well as unit tests.
- **Issue:** Splitting structured data as regular text can cause issues:
if you have a large nested json object and split it as plain text, you
may lose the structure of the json. To mitigate this you can split the
nested json into large chunks with overlap, but that causes unnecessary
text processing, and there will still be times when the nested json is
so big that the chunks get separated from their parent keys.

As an example, you wouldn't want the following to be split in half:
```shell
{'val0': 'DFWeNdWhapbR',
 'val1': {'val10': 'QdJo',
          'val11': 'FWSDVFHClW',
          'val12': 'bkVnXMMlTiQh',
          'val13': 'tdDMKRrOY',
          'val14': 'zybPALvL',
          'val15': 'JMzGMNH',
          'val16': {'val160': 'qLuLKusFw',
                    'val161': 'DGuotLh',
                    'val162': 'KztlcSBropT',
-----------------------------------------------------------------------split-----
                    'val163': 'YlHHDrN',
                    'val164': 'CtzsxlGBZKf',
                    'val165': 'bXzhcrWLmBFp',
                    'val166': 'zZAqC',
                    'val167': 'ZtyWno',
                    'val168': 'nQQZRsLnaBhb',
                    'val169': 'gSpMbJwA'},
          'val17': 'JhgiyF',
          'val18': 'aJaqjUSFFrI',
          'val19': 'glqNSvoyxdg'}}
```
Any LLM processing the second chunk of text would lack the context of
val1 and val16, reducing accuracy. Embeddings will also lack this
context, making retrieval less accurate.

Instead, you want it to be split into chunks that retain the json
structure:
```shell
{'val0': 'DFWeNdWhapbR',
 'val1': {'val10': 'QdJo',
          'val11': 'FWSDVFHClW',
          'val12': 'bkVnXMMlTiQh',
          'val13': 'tdDMKRrOY',
          'val14': 'zybPALvL',
          'val15': 'JMzGMNH',
          'val16': {'val160': 'qLuLKusFw',
                    'val161': 'DGuotLh',
                    'val162': 'KztlcSBropT',
                    'val163': 'YlHHDrN',
                    'val164': 'CtzsxlGBZKf'}}}
```
and
```shell
{'val1':{'val16':{
                    'val165': 'bXzhcrWLmBFp',
                    'val166': 'zZAqC',
                    'val167': 'ZtyWno',
                    'val168': 'nQQZRsLnaBhb',
                    'val169': 'gSpMbJwA'},
          'val17': 'JhgiyF',
          'val18': 'aJaqjUSFFrI',
          'val19': 'glqNSvoyxdg'}}
```
This recursive json text splitter does exactly that. Values that contain
a list can be converted to a dict first by calling split_text(...,
convert_lists=True); otherwise long lists will not be split, and you may
end up with chunks larger than the max chunk size.
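
A minimal usage sketch, mirroring the docs notebook added in this PR
(the tiny `json_data` dict here is just an illustrative stand-in for a
large nested object):
```python
from langchain.text_splitter import RecursiveJsonSplitter

# stand-in for any large nested dict
json_data = {"val0": "DFWeNdWhapbR", "val1": {"val16": {"val160": "qLuLKusFw"}}}

splitter = RecursiveJsonSplitter(max_chunk_size=300)

# split into smaller json (dict) chunks that keep their parent keys...
json_chunks = splitter.split_json(json_data=json_data)

# ...or into strings / Documents; convert_lists=True first turns lists
# into index-keyed dicts so long lists can be split too
texts = splitter.split_text(json_data=json_data, convert_lists=True)
docs = splitter.create_documents(texts=[json_data])
```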

In my testing, large json objects could be split into small chunks, with:
- increased question answering accuracy
- retrieval queries that use fewer tokens, since the chunks can be smaller


- **Dependencies:** json import added to text_splitter.py, and random
added to the unit test
- **Twitter handle:** @joelsprunger

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
@@ -0,0 +1,225 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a678d550",
"metadata": {},
"source": [
"# Recursively split JSON\n",
"\n",
"This json splitter traverses json data depth first and builds smaller json chunks. It attempts to keep nested json objects whole but will split them if needed to keep chunks between a min_chunk_size and the max_chunk_size. If the value is not a nested json, but rather a very large string the string will not be split. If you need a hard cap on the chunk size considder following this with a Recursive Text splitter on those chunks. There is an optional pre-processing step to split lists, by first converting them to json (dict) and then splitting them as such.\n",
"\n",
"1. How the text is split: json value.\n",
"2. How the chunk size is measured: by number of characters."
]
},
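{
"cell_type": "code",
"execution_count": null,
"id": "dfs-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch of the depth-first idea described above. This is an\n",
"# illustrative assumption, not the library's actual implementation: walk\n",
"# the json depth first, recording the full key path to each leaf, so any\n",
"# chunk built from these pairs can retain its parent keys.\n",
"def iter_paths(data, path=()):\n",
"    if isinstance(data, dict):\n",
"        for k, v in data.items():\n",
"            yield from iter_paths(v, path + (k,))\n",
"    else:\n",
"        yield path, data\n",
"\n",
"# The splitter then packs (path, value) pairs into chunks under the size\n",
"# limit, re-nesting the path keys so each chunk is itself valid json."
]
},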
{
"cell_type": "code",
"execution_count": 1,
"id": "a504e1e7",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"import requests"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3390ae1d",
"metadata": {},
"outputs": [],
"source": [
"# This is a large nested json object and will be loaded as a python dict\n",
"json_data = requests.get(\"https://api.smith.langchain.com/openapi.json\").json()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "7bfe2c1e",
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import RecursiveJsonSplitter"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2833c409",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"splitter = RecursiveJsonSplitter(max_chunk_size=300)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f941aa56",
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"# Recursively split json data - If you need to access/manipulate the smaller json chunks\n",
"json_chunks = splitter.split_json(json_data=json_data)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0839f4f0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\"openapi\": \"3.0.2\", \"info\": {\"title\": \"LangChainPlus\", \"version\": \"0.1.0\"}, \"paths\": {\"/sessions/{session_id}\": {\"get\": {\"tags\": [\"tracer-sessions\"], \"summary\": \"Read Tracer Session\", \"description\": \"Get a specific session.\", \"operationId\": \"read_tracer_session_sessions__session_id__get\"}}}}\n",
"{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}\n"
]
}
],
"source": [
"# The splitter can also output documents\n",
"docs = splitter.create_documents(texts=[json_data])\n",
"\n",
"# or a list of strings\n",
"texts = splitter.split_text(json_data=json_data)\n",
"\n",
"print(texts[0])\n",
"print(texts[1])"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c34b1f7f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[293, 431, 203, 277, 230, 194, 162, 280, 223, 193]\n",
"{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}\n"
]
}
],
"source": [
"# Let's look at the size of the chunks\n",
"print([len(text) for text in texts][:10])\n",
"\n",
"# Reviewing one of these chunks that was bigger we see there is a list object there\n",
"print(texts[1])"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "992477c2",
"metadata": {},
"outputs": [],
"source": [
"# The json splitter by default does not split lists\n",
"# the following will preprocess the json and convert list to dict with index:item as key:val pairs\n",
"texts = splitter.split_text(json_data=json_data, convert_lists=True)"
]
},
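{
"cell_type": "code",
"execution_count": null,
"id": "convert-lists-sketch",
"metadata": {},
"outputs": [],
"source": [
"# An illustrative sketch of the conversion (assumed shape, shown with\n",
"# plain python rather than the splitter's internals): a list becomes an\n",
"# index-keyed dict, which can then be split like any other nested json\n",
"example = {\"tags\": [\"a\", \"b\", \"c\"]}\n",
"converted = {\"tags\": {str(i): item for i, item in enumerate(example[\"tags\"])}}\n",
"print(converted)  # {'tags': {'0': 'a', '1': 'b', '2': 'c'}}"
]
},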
{
"cell_type": "code",
"execution_count": 10,
"id": "2d23b3aa",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[293, 431, 203, 277, 230, 194, 162, 280, 223, 193]\n"
]
}
],
"source": [
"# Let's look at the size of the chunks. Now they are all under the max\n",
"print([len(text) for text in texts][:10])"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "d2c2773e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}\n"
]
}
],
"source": [
"# The list has been converted to a dict, but retains all the needed contextual information even if split into many chunks\n",
"print(texts[1])"
]
},
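{
"cell_type": "code",
"execution_count": null,
"id": "hard-cap-sketch",
"metadata": {},
"outputs": [],
"source": [
"# A sketch of the hard-cap approach suggested in the intro: follow the\n",
"# json splitter with a recursive character splitter so no chunk exceeds\n",
"# the cap. The 300 character cap is an illustrative assumption matching\n",
"# max_chunk_size above.\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"\n",
"char_splitter = RecursiveCharacterTextSplitter(chunk_size=300, chunk_overlap=0)\n",
"\n",
"capped_texts = []\n",
"for text in texts:\n",
"    # texts shorter than the cap come back unchanged as a single chunk\n",
"    capped_texts.extend(char_splitter.split_text(text))\n",
"\n",
"print(max(len(text) for text in capped_texts))"
]
},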
{
"cell_type": "code",
"execution_count": 13,
"id": "8963b01a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(page_content='{\"paths\": {\"/sessions/{session_id}\": {\"get\": {\"parameters\": [{\"required\": true, \"schema\": {\"title\": \"Session Id\", \"type\": \"string\", \"format\": \"uuid\"}, \"name\": \"session_id\", \"in\": \"path\"}, {\"required\": false, \"schema\": {\"title\": \"Include Stats\", \"type\": \"boolean\", \"default\": false}, \"name\": \"include_stats\", \"in\": \"query\"}, {\"required\": false, \"schema\": {\"title\": \"Accept\", \"type\": \"string\"}, \"name\": \"accept\", \"in\": \"header\"}]}}}}')"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# We can also look at the documents\n",
"docs[1]"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}