{
"cells": [
{
"cell_type": "markdown",
"id": "c95fcd15cd52c944",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"# How to split by HTML header \n",
"## Description and motivation\n",
"\n",
"[HTMLHeaderTextSplitter](https://api.python.langchain.com/en/latest/html/langchain_text_splitters.html.HTMLHeaderTextSplitter.html) is a \"structure-aware\" chunker that splits text at the HTML element level and adds metadata for each header \"relevant\" to any given chunk. It can return chunks element by element or combine elements with the same metadata, with the objectives of (a) keeping related text grouped (more or less) semantically and (b) preserving context-rich information encoded in document structures. It can be used with other text splitters as part of a chunking pipeline.\n",
"\n",
"It is analogous to the [MarkdownHeaderTextSplitter](/docs/how_to/markdown_header_metadata_splitter) for markdown files.\n",
"\n",
"To specify what headers to split on, specify `headers_to_split_on` when instantiating `HTMLHeaderTextSplitter` as shown below.\n",
"\n",
"## Usage examples\n",
"### 1) How to split HTML strings:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2e55d44c-1fff-449a-bf52-0d6df488323f",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-text-splitters"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "initial_id",
"metadata": {
"ExecuteTime": {
"end_time": "2023-10-02T18:57:49.208965400Z",
"start_time": "2023-10-02T18:57:48.899756Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='Foo'),\n",
" Document(page_content='Some intro text about Foo. \\nBar main section Bar subsection 1 Bar subsection 2', metadata={'Header 1': 'Foo'}),\n",
" Document(page_content='Some intro text about Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section'}),\n",
" Document(page_content='Some text about the first subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 1'}),\n",
" Document(page_content='Some text about the second subtopic of Bar.', metadata={'Header 1': 'Foo', 'Header 2': 'Bar main section', 'Header 3': 'Bar subsection 2'}),\n",
" Document(page_content='Baz', metadata={'Header 1': 'Foo'}),\n",
" Document(page_content='Some text about Baz', metadata={'Header 1': 'Foo', 'Header 2': 'Baz'}),\n",
" Document(page_content='Some concluding text about Foo', metadata={'Header 1': 'Foo'})]"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_text_splitters import HTMLHeaderTextSplitter\n",
"\n",
"html_string = \"\"\"\n",
"<!DOCTYPE html>\n",
"<html>\n",
"<body>\n",
" <div>\n",
" <h1>Foo</h1>\n",
" <p>Some intro text about Foo.</p>\n",
" <div>\n",
" <h2>Bar main section</h2>\n",
" <p>Some intro text about Bar.</p>\n",
" <h3>Bar subsection 1</h3>\n",
" <p>Some text about the first subtopic of Bar.</p>\n",
" <h3>Bar subsection 2</h3>\n",
" <p>Some text about the second subtopic of Bar.</p>\n",
" </div>\n",
" <div>\n",
" <h2>Baz</h2>\n",
" <p>Some text about Baz</p>\n",
" </div>\n",
" <br>\n",
" <p>Some concluding text about Foo</p>\n",
" </div>\n",
"</body>\n",
"</html>\n",
"\"\"\"\n",
"\n",
"headers_to_split_on = [\n",
" (\"h1\", \"Header 1\"),\n",
" (\"h2\", \"Header 2\"),\n",
" (\"h3\", \"Header 3\"),\n",
"]\n",
"\n",
"html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)\n",
"html_header_splits = html_splitter.split_text(html_string)\n",
"html_header_splits"
]
},
{
"cell_type": "markdown",
"id": "7126f179-f4d0-4b5d-8bef-44e83b59262c",
"metadata": {},
"source": [
"To return each element together with their associated headers, specify `return_each_element=True` when instantiating `HTMLHeaderTextSplitter`:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "90c23088-804c-4c89-bd09-b820587ceeef",
"metadata": {},
"outputs": [],
"source": [
"html_splitter = HTMLHeaderTextSplitter(\n",
" headers_to_split_on,\n",
" return_each_element=True,\n",
")\n",
"html_header_splits_elements = html_splitter.split_text(html_string)"
]
},
{
"cell_type": "markdown",
"id": "b776c54e-9159-4d88-9d6c-3a1d0b639dfe",
"metadata": {},
"source": [
"Comparing with the above, where elements are aggregated by their headers:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "711abc74-a7b0-4dc5-a4bb-af3cafe4e0f4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='Foo'\n",
"page_content='Some intro text about Foo. \\nBar main section Bar subsection 1 Bar subsection 2' metadata={'Header 1': 'Foo'}\n"
]
}
],
"source": [
"for element in html_header_splits[:2]:\n",
" print(element)"
]
},
{
"cell_type": "markdown",
"id": "fe5528db-187c-418a-9480-fc0267645d42",
"metadata": {},
"source": [
"Now each element is returned as a distinct `Document`:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "24722d8e-d073-46a8-a821-6b722412f1be",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"page_content='Foo'\n",
"page_content='Some intro text about Foo.' metadata={'Header 1': 'Foo'}\n",
"page_content='Bar main section Bar subsection 1 Bar subsection 2' metadata={'Header 1': 'Foo'}\n"
]
}
],
"source": [
"for element in html_header_splits_elements[:3]:\n",
" print(element)"
]
},
{
"cell_type": "markdown",
"id": "e29b4aade2a0070c",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"#### 2) How to split from a URL or HTML file:\n",
"\n",
"To read directly from a URL, pass the URL string into the `split_text_from_url` method.\n",
"\n",
"Similarly, a local HTML file can be passed to the `split_text_from_file` method."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6ecb9fb2-32ff-4249-a4b4-d5e5e191f013",
"metadata": {},
"outputs": [],
"source": [
"url = \"https://plato.stanford.edu/entries/goedel/\"\n",
"\n",
"headers_to_split_on = [\n",
" (\"h1\", \"Header 1\"),\n",
" (\"h2\", \"Header 2\"),\n",
" (\"h3\", \"Header 3\"),\n",
" (\"h4\", \"Header 4\"),\n",
"]\n",
"\n",
"html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)\n",
"\n",
"# for local file use html_splitter.split_text_from_file(<path_to_file>)\n",
"html_header_splits = html_splitter.split_text_from_url(url)"
]
},
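{
"cell_type": "markdown",
"id": "split-from-file-note",
"metadata": {},
"source": [
"Below is a minimal sketch of the local-file variant mentioned above. The path `example.html` is only a placeholder (for instance, a copy of the page above saved to disk); substitute any HTML file you have locally:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "split-from-file-example",
"metadata": {},
"outputs": [],
"source": [
"# \"example.html\" is a placeholder path; point it at any local HTML file\n",
"html_header_splits_from_file = html_splitter.split_text_from_file(\"example.html\")\n",
"html_header_splits_from_file[:2]"
]
},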
{
"cell_type": "markdown",
"id": "c6e3dd41-0c57-472a-a3d4-4e7e8ea6914f",
"metadata": {},
"source": [
"### 2) How to constrain chunk sizes:\n",
"\n",
"`HTMLHeaderTextSplitter`, which splits based on HTML headers, can be composed with another splitter which constrains splits based on character lengths, such as `RecursiveCharacterTextSplitter`.\n",
"\n",
"This can be done using the `.split_documents` method of the second splitter:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "6ada8ea093ea0475",
"metadata": {
"ExecuteTime": {
"end_time": "2023-10-02T18:57:51.016141300Z",
"start_time": "2023-10-02T18:57:50.647495400Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='We see that Gödel first tried to reduce the consistency problem for analysis to that of arithmetic. This seemed to require a truth definition for arithmetic, which in turn led to paradoxes, such as the Liar paradox (“This sentence is false”) and Berrys paradox (“The least number not defined by an expression consisting of just fourteen English words”). Gödel then noticed that such paradoxes would not necessarily arise if truth were replaced by provability. But this means that arithmetic truth', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödels Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}),\n",
" Document(page_content='means that arithmetic truth and arithmetic provability are not co-extensive — whence the First Incompleteness Theorem.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödels Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}),\n",
" Document(page_content='This account of Gödels discovery was told to Hao Wang very much after the fact; but in Gödels contemporary correspondence with Bernays and Zermelo, essentially the same description of his path to the theorems is given. (See Gödel 2003a and Gödel 2003b respectively.) From those accounts we see that the undefinability of truth in arithmetic, a result credited to Tarski, was likely obtained in some form by Gödel by 1931. But he neither publicized nor published the result; the biases logicians', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödels Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}),\n",
" Document(page_content='result; the biases logicians had expressed at the time concerning the notion of truth, biases which came vehemently to the fore when Tarski announced his results on the undefinability of truth in formal systems 1935, may have served as a deterrent to Gödels publication of that theorem.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödels Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.1 The First Incompleteness Theorem'}),\n",
" Document(page_content='We now describe the proof of the two theorems, formulating Gödels results in Peano arithmetic. Gödel himself used a system related to that defined in Principia Mathematica, but containing Peano arithmetic. In our presentation of the First and Second Incompleteness Theorems we refer to Peano arithmetic as P, following Gödels notation.', metadata={'Header 1': 'Kurt Gödel', 'Header 2': '2. Gödels Mathematical Work', 'Header 3': '2.2 The Incompleteness Theorems', 'Header 4': '2.2.2 The proof of the First Incompleteness Theorem'})]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"chunk_size = 500\n",
"chunk_overlap = 30\n",
"text_splitter = RecursiveCharacterTextSplitter(\n",
" chunk_size=chunk_size, chunk_overlap=chunk_overlap\n",
")\n",
"\n",
"# Split\n",
"splits = text_splitter.split_documents(html_header_splits)\n",
"splits[80:85]"
]
},
{
"cell_type": "markdown",
"id": "ac0930371d79554a",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Limitations\n",
"\n",
"There can be quite a bit of structural variation from one HTML document to another, and while `HTMLHeaderTextSplitter` will attempt to attach all \"relevant\" headers to any given chunk, it can sometimes miss certain headers. For example, the algorithm assumes an informational hierarchy in which headers are always at nodes \"above\" associated text, i.e. prior siblings, ancestors, and combinations thereof. In the following news article (as of the writing of this document), the document is structured such that the text of the top-level headline, while tagged \"h1\", is in a *distinct* subtree from the text elements that we'd expect it to be *\"above\"*&mdash;so we can observe that the \"h1\" element and its associated text do not show up in the chunk metadata (but, where applicable, we do see \"h2\" and its associated text): \n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "5a5ec1482171b119",
"metadata": {
"ExecuteTime": {
"end_time": "2023-10-02T19:03:25.943524300Z",
"start_time": "2023-10-02T19:03:25.691641Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"No two El Niño winters are the same, but many have temperature and precipitation trends in common. \n",
"Average conditions during an El Niño winter across the continental US. \n",
"One of the major reasons is the position of the jet stream, which often shifts south during an El Niño winter. This shift typically brings wetter and cooler weather to the South while the North becomes drier and warmer, according to NOAA. \n",
"Because the jet stream is essentially a river of air that storms flow through, they c\n"
]
}
],
"source": [
"url = \"https://www.cnn.com/2023/09/25/weather/el-nino-winter-us-climate/index.html\"\n",
"\n",
"headers_to_split_on = [\n",
" (\"h1\", \"Header 1\"),\n",
" (\"h2\", \"Header 2\"),\n",
"]\n",
"\n",
"html_splitter = HTMLHeaderTextSplitter(headers_to_split_on)\n",
"html_header_splits = html_splitter.split_text_from_url(url)\n",
"print(html_header_splits[1].page_content[:500])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}