Please see PR #27678 for context
## Overview
This pull request presents a refactor of the `HTMLHeaderTextSplitter`
class aimed at improving its maintainability and readability. The
primary enhancements include simplifying the internal structure by
consolidating multiple private helper functions into a single private
method, thereby reducing complexity and making the codebase easier to
understand and extend. Importantly, all existing functionalities and
public interfaces remain unchanged.
## PR Goals
1. **Simplify Internal Logic**:
- **Consolidation of Private Methods**: The original implementation
utilized multiple private helper functions (`_header_level`,
`_dom_depth`, `_get_elements`) to manage different aspects of HTML
parsing and document generation. This fragmentation increased cognitive
load and potential maintenance overhead.
- **Streamlined Processing**: By merging these functionalities into a
single private method (`_generate_documents`), the class now offers a
more straightforward flow, making it easier for developers to trace and
understand the processing steps. (Thanks to @eyurtsev)
2. **Enhance Readability**:
- **Clearer Method Responsibilities**: With fewer private methods, each
method now has a more focused responsibility. The primary logic resides
within `_generate_documents`, which handles both HTML traversal and
document creation in a cohesive manner.
- **Reduced Redundancy**: Eliminating redundant checks and consolidating
logic reduces the code's verbosity, making it more concise without
sacrificing clarity.
3. **Improve Maintainability**:
- **Easier Debugging and Extension**: A simplified internal structure
allows for quicker identification of issues and easier implementation of
future enhancements or feature additions.
- **Consistent Header Management**: The new implementation ensures that
headers are managed consistently within a single context, reducing the
likelihood of bugs related to header scope and hierarchy.
4. **Maintain Backward Compatibility**:
- **Unchanged Public Interface**: All public methods (`split_text`,
`split_text_from_url`, `split_text_from_file`) and their signatures
remain unchanged, ensuring that existing integrations and usage patterns
are unaffected.
- **Preserved Docstrings**: Comprehensive docstrings are retained,
providing clear documentation for users and developers alike.
## Detailed Changes
1. **Removed Redundant Private Methods**:
- **Eliminated `_header_level`, `_dom_depth`, and `_get_elements`**:
These methods were merged into the `_generate_documents` method,
centralizing the logic for HTML parsing and document generation.
2. **Consolidated Document Generation Logic**:
- **Single Private Method `_generate_documents`**: This method now
handles the entire process of parsing HTML, tracking active headers,
managing document chunks, and yielding `Document` instances. This
consolidation reduces the number of moving parts and simplifies the
overall processing flow (see the schematic sketch after this list).
3. **Simplified Header Management**:
- **Immediate Header Scope Handling**: Headers are now managed within
the traversal loop of `_generate_documents`, ensuring that headers are
added or removed from the active headers dictionary in real-time based
on their DOM depth and hierarchy.
- **Removed `chunk_dom_depth` Attribute**: The need to track chunk DOM
depth separately has been eliminated, as header scopes are now directly
managed within the traversal logic.
4. **Streamlined Chunk Finalization**:
- **Enhanced `finalize_chunk` Function**: The chunk finalization process
has been simplified to directly yield a single `Document` when needed,
without maintaining an intermediate list. This change reduces
unnecessary list operations and makes the logic more straightforward.
5. **Improved Variable Naming and Flow**:
- **Descriptive Variable Names**: Variables such as `current_chunk` and
`node_text` provide clear insights into their roles within the
processing logic.
- **Direct Header Removal Logic**: Headers that are out of scope are
removed immediately during traversal, ensuring that the active headers
dictionary remains accurate and up-to-date.
6. **Preserved Comprehensive Docstrings**:
- **Unchanged Documentation**: All existing docstrings, including
class-level and method-level documentation, remain intact. This ensures
that users and developers continue to have access to detailed usage
instructions and method explanations.
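To make the shape of the consolidation concrete, here is a schematic sketch of the single-generator approach described above. It is an illustration under simplifying assumptions, not the merged implementation: the real method walks a parsed DOM, while this sketch takes a pre-flattened list of `(tag, text)` nodes, and the helper name is hypothetical.

```python
from typing import Dict, Iterator, List, Optional, Tuple

from langchain_core.documents import Document


def _generate_documents_sketch(
    nodes: List[Tuple[str, str]],  # (tag name, text) in document order
    headers_to_split_on: Dict[str, str],  # e.g. {"h1": "Header 1"}
) -> Iterator[Document]:
    # metadata key -> (header text, header level), e.g. "Header 1" -> ("Intro", 1)
    active_headers: Dict[str, Tuple[str, int]] = {}
    current_chunk: List[str] = []

    def finalize_chunk() -> Optional[Document]:
        # Emit the accumulated text as one Document tagged with the
        # currently active headers, then reset the buffer.
        if not current_chunk:
            return None
        doc = Document(
            page_content=" ".join(current_chunk),
            metadata={key: text for key, (text, _) in active_headers.items()},
        )
        current_chunk.clear()
        return doc

    for tag, node_text in nodes:
        if tag in headers_to_split_on:
            # A new header closes the current chunk...
            doc = finalize_chunk()
            if doc:
                yield doc
            # ...and immediately evicts active headers at the same or a
            # deeper level, keeping the header scope accurate in real time.
            level = int(tag[1])  # "h2" -> 2
            for key in [k for k, (_, lvl) in active_headers.items() if lvl >= level]:
                del active_headers[key]
            active_headers[headers_to_split_on[tag]] = (node_text, level)
        elif node_text:
            current_chunk.append(node_text)

    doc = finalize_chunk()
    if doc:
        yield doc


# Example: two headers, two paragraphs -> two Documents.
nodes = [("h1", "Intro"), ("p", "hello"), ("h2", "Details"), ("p", "world")]
for doc in _generate_documents_sketch(nodes, {"h1": "Header 1", "h2": "Header 2"}):
    print(doc.metadata, "->", doc.page_content)
# {'Header 1': 'Intro'} -> hello
# {'Header 1': 'Intro', 'Header 2': 'Details'} -> world
```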
## Testing
All existing test cases from `test_html_header_text_splitter.py` have
been executed against the refactored code. The results confirm that:
- **Functionality Remains Intact**: The splitter continues to accurately
parse HTML content, respect header hierarchies, and produce the expected
`Document` objects with correct metadata.
- **Backward Compatibility is Maintained**: No changes were required in
the test cases, and all tests pass without modifications, demonstrating
that the refactor does not introduce any regressions or alter existing
behaviors.
The usage example below remains fully operational and behaves as before,
returning a list of `Document` objects with the expected metadata and
content splits.
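A minimal example of the unchanged public interface (the HTML snippet is illustrative):

```python
from langchain_text_splitters import HTMLHeaderTextSplitter

html_string = """
<html><body>
  <h1>Foo</h1>
  <p>Some intro text about Foo.</p>
  <h2>Bar</h2>
  <p>Some text about Bar.</p>
</body></html>
"""

splitter = HTMLHeaderTextSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")]
)
docs = splitter.split_text(html_string)
for doc in docs:
    print(doc.metadata, "->", doc.page_content)
```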
## Conclusion
This refactor achieves a more maintainable and readable codebase by
simplifying the internal structure of the `HTMLHeaderTextSplitter`
class. By consolidating multiple private methods into a single, cohesive
private method, the class becomes easier to understand, debug, and
extend. All existing functionalities are preserved, and comprehensive
tests confirm that the refactor maintains the expected behavior. These
changes align with LangChain’s standards for clean, maintainable, and
efficient code.
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
This pull request updates the `HTMLHeaderTextSplitter` by replacing the
`split_text_from_file` method's implementation. The original method used
`lxml` and XSLT to process HTML files, which raised
`lxml.etree.XSLTApplyError: maxHead` when handling large HTML documents,
due to limitations in the XSLT processor. Fixes #13149
By switching to BeautifulSoup (`bs4`), we achieve:
- **Improved Performance and Reliability:** BeautifulSoup efficiently
processes large HTML files without the errors associated with `lxml` and
XSLT.
- **Simplified Dependencies:** Removes the dependency on `lxml` and
external XSLT files, relying instead on the widely used `beautifulsoup4`
library.
- **Maintained Functionality:** The new method replicates the original
behavior, ensuring compatibility with existing code and preserving the
extraction of content and metadata.
**Issue:**
This change addresses issues with processing large HTML files using the
existing `HTMLHeaderTextSplitter` implementation. It resolves cases where
users encounter `lxml.etree.XSLTApplyError: maxHead` due to large HTML
documents.
**Dependencies:**
- **BeautifulSoup (`beautifulsoup4`):** The `beautifulsoup4` library is
now used for parsing HTML content.
- Installation: `pip install beautifulsoup4`
**Code Changes:**
Updated the `split_text_from_file` method in `HTMLHeaderTextSplitter` as
follows:
```python
def split_text_from_file(self, file: Any) -> List[Document]:
    """Split HTML file using BeautifulSoup.

    Args:
        file: HTML file path or file-like object.

    Returns:
        List of Document objects with page_content and metadata.
    """
    import bs4
    from bs4 import BeautifulSoup

    from langchain.docstore.document import Document

    # Read the HTML content from the file or file-like object
    if isinstance(file, str):
        with open(file, "r", encoding="utf-8") as f:
            html_content = f.read()
    else:
        # Assume file is a file-like object
        html_content = file.read()

    # Parse the HTML content using BeautifulSoup
    soup = BeautifulSoup(html_content, "html.parser")

    # Extract the header tags and their corresponding metadata keys
    headers_to_split_on = [tag[0] for tag in self.headers_to_split_on]
    header_mapping = dict(self.headers_to_split_on)
    documents = []

    # Find the body of the document
    body = soup.body if soup.body else soup

    # Find all header tags in the order they appear
    all_headers = body.find_all(headers_to_split_on)

    # If there's content before the first header, collect it
    first_header = all_headers[0] if all_headers else None
    if first_header:
        pre_header_content = ""
        for elem in first_header.find_all_previous():
            if isinstance(elem, bs4.Tag):
                text = elem.get_text(separator=" ", strip=True)
                if text:
                    pre_header_content = text + " " + pre_header_content
        if pre_header_content.strip():
            documents.append(
                Document(
                    page_content=pre_header_content.strip(),
                    metadata={},  # No metadata since there's no header
                )
            )
    else:
        # If no headers are found, return the whole content
        full_text = body.get_text(separator=" ", strip=True)
        if full_text.strip():
            documents.append(Document(page_content=full_text.strip(), metadata={}))
        return documents

    # Process each header and its associated content
    for header in all_headers:
        current_metadata = {}
        header_name = header.name
        header_text = header.get_text(separator=" ", strip=True)
        current_metadata[header_mapping[header_name]] = header_text

        # Collect sibling elements until the next header tag
        content_elements = []
        for sibling in header.find_next_siblings():
            if sibling.name in headers_to_split_on:
                # Stop at the next header
                break
            if isinstance(sibling, bs4.Tag):
                content_elements.append(sibling)

        # Get the text content of the collected elements
        current_content = ""
        for elem in content_elements:
            text = elem.get_text(separator=" ", strip=True)
            if text:
                current_content += text + " "

        # Create a Document if there is content
        if current_content.strip():
            documents.append(
                Document(
                    page_content=current_content.strip(),
                    metadata=current_metadata.copy(),
                )
            )
        else:
            # No content, but we have header metadata: still create a Document
            documents.append(
                Document(page_content="", metadata=current_metadata.copy())
            )

    return documents
```
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
**Description:**
Current HTML splitters rely on a secondary pass with
`RecursiveCharacterTextSplitter` to further chunk the document into
manageable pieces. The issue with this is that it fails to maintain
important structures such as tables and lists within the HTML.
This implementation of an HTML splitter allows the user to define a
maximum chunk size, HTML elements to preserve in full, an option to
preserve `<a>` href links in the output, and custom handlers.
The core splitting begins with headers, similar to
`HTMLHeaderTextSplitter`. If these sections exceed `max_chunk_size`,
further recursive splitting is triggered. During this splitting, elements
listed for preservation are excluded from the splitting process. This can
cause chunks to be slightly larger than the max size, depending on the
preserved length; however, all contextual relevance of the preserved item
remains intact.
**Custom Handlers**: Companies such as Atlassian sometimes use custom
HTML elements that are not parsed by default with `BeautifulSoup`. Custom
handlers allow a user to provide a function to be run whenever a specific
HTML tag is encountered. This lets the user preserve and gather
information within custom HTML tags that `bs4` would otherwise miss
during extraction (see the usage sketch below).
**Dependencies:** Users will need to install `bs4` in their project to
utilise this class.
I have also added a `how_to` guide and unit tests, which require `bs4` to
run; otherwise they are skipped.
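A minimal usage sketch based on the feature description above; the parameter names should be verified against the class signature in your installed `langchain_text_splitters` version, and the Atlassian-style tag name is hypothetical:

```python
from langchain_text_splitters import HTMLSemanticPreservingSplitter

html_string = """
<html><body>
  <h1>Report</h1>
  <p>Summary paragraph.</p>
  <table><tr><td>kept</td><td>whole</td></tr></table>
</body></html>
"""


def handle_custom_tag(tag) -> str:
    # Hypothetical handler for a non-standard tag that bs4 would not
    # extract meaningfully on its own.
    return f"[macro: {tag.get_text(strip=True)}]"


splitter = HTMLSemanticPreservingSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")],
    max_chunk_size=500,
    elements_to_preserve=["table", "ul", "ol"],  # kept intact, never split
    preserve_links=True,  # keep <a> href targets in the output text
    custom_handlers={"ac:structured-macro": handle_custom_tag},
)
docs = splitter.split_text(html_string)
```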
Flowchart of process:

---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
As seen in #23188, turned on Google-style docstrings by enabling
`pydocstyle` linting in the `text-splitters` package. Each resulting
linting error was addressed case by case: some were resolved, some were
suppressed or ignored, and missing docstrings were added.
Fixes one of the checklist items from #25154, similar to #25939 in the
`core` package. Ran `make format`, `make lint` and `make test` from the
root of the `text-splitters` package to ensure no issues were found.
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
**Description:**
The `split_text_from_url` method of `HTMLHeaderTextSplitter` does not
accept parameters such as `timeout` when using `requests` to send a
request. I therefore suggest adding a `kwargs` parameter to the function,
passed through to `requests.get()` internally, allowing control over the
GET request.
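A sketch of the suggested change, assuming the surrounding module already imports `Any`, `List`, and `Document`; the merged signature may differ:

```python
import requests


def split_text_from_url(
    self, url: str, timeout: int = 10, **kwargs: Any
) -> List[Document]:
    """Split HTML fetched from a URL.

    Extra keyword arguments (headers, proxies, auth, ...) are forwarded
    to requests.get(), and a sane default timeout is kept.
    """
    kwargs.setdefault("timeout", timeout)
    response = requests.get(url, **kwargs)
    response.raise_for_status()  # surface HTTP errors instead of parsing error pages
    return self.split_text(response.text)
```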
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Update to former pull request:
https://github.com/langchain-ai/langchain/pull/22654.
Modified `langchain_text_splitters.HTMLSectionSplitter`, where in the
latest version a `dict` is used to store the sections of an HTML document
in the `split_html_by_headers` function, with the header/section element
names serving as dict keys. This is a problem when duplicate
header/section element names are present in a single HTML document: later
sections replace earlier ones with the same name, so some content can go
missing after the HTML text is split.
Using a list to store sections solves the problem. A unit test covering
duplicate header names has been added; a minimal illustration of the
failure mode follows.
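A self-contained sketch of the failure mode (not the actual library code):

```python
headers = ["Introduction", "Details", "Details"]  # duplicate section name
contents = ["intro text", "first details", "second details"]

# Old approach: header names as dict keys -> duplicates silently overwrite.
as_dict = {h: c for h, c in zip(headers, contents)}
assert as_dict["Details"] == "second details"  # "first details" is lost

# New approach: a list keeps every section, duplicates included.
as_list = [{"header": h, "content": c} for h, c in zip(headers, contents)]
assert len(as_list) == 3  # all sections preserved, in document order
```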
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>
## Description
This PR allows passing paths to XSLT files to the `HTMLSectionSplitter`.
It does so by fixing two trivial bugs in how passed paths were handled.
It also changes the default value of the `xslt_path` parameter to `None`,
so the special case where the file is part of the langchain package can
be handled. A sketch of the resulting usage follows.
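Assuming the behavior described above (the custom path is hypothetical):

```python
from langchain_text_splitters import HTMLSectionSplitter

# Default: xslt_path=None, so the stylesheet bundled with the package is used.
default_splitter = HTMLSectionSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")]
)

# With the fix, a custom stylesheet path can be passed explicitly.
custom_splitter = HTMLSectionSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")],
    xslt_path="stylesheets/custom_converting_to_header.xslt",  # hypothetical path
)
```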
## Issue
#22175
- **Description:** the layout of HTML pages can vary depending on the
Bootstrap framework or the styles of the pages, so we need a splitter
that transforms the HTML tags into a proper layout and then splits the
HTML content based on the provided list of tags that determine its HTML
sections. We use the BS4 library along with an XSLT structure to split
the HTML content using a section-aware approach.
- **Dependencies:** No new dependencies
- **Twitter handle:** @m_setayesh
---------
Co-authored-by: Bagatur <baskaryan@gmail.com>