🤖 AI Summary
This work addresses the limitations of existing web text extraction methods, which rely on a single fixed extractor and consequently suffer from insufficient data coverage and low utilization. To overcome this, the authors propose a multi-extractor ensemble strategy that integrates multiple open-source HTML extractors by taking their union, complemented by an optimized data filtering pipeline and a mechanism to preserve structured content such as tables and code. Evaluated on DCLM-Baseline, this approach increases token yield by up to 71% while delivering gains of up to 10 and 3 percentage points on the downstream tasks WikiTQ and HumanEval, respectively. These results underscore the critical impact of extractor selection on downstream performance for structured reasoning tasks.
📝 Abstract
One of the first pre-processing steps for constructing web-scale LLM pretraining datasets involves extracting text from HTML. Despite the immense diversity of web content, existing open-source datasets predominantly apply a single fixed extractor to all webpages. In this work, we investigate whether this practice leads to suboptimal coverage and utilization of Internet data. We first show that while different extractors may lead to similar model performance on standard language understanding tasks, the pages surviving a fixed filtering pipeline can differ substantially. This suggests a simple intervention: by taking a Union over different extractors, we can increase the token yield of DCLM-Baseline by up to 71% while maintaining benchmark performance. We further show that for structured content such as tables and code blocks, extractor choice can significantly impact downstream task performance, with differences of up to 10 percentage points (p.p.) on WikiTQ and 3 p.p. on HumanEval.
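The union intervention described above can be sketched in a few lines. This is a minimal toy illustration under stated assumptions, not the authors' pipeline: the two extractors below (a paragraph-only DOM walker and a crude tag stripper) and the word-count filter are hypothetical stand-ins for real open-source HTML extractors and a fixed quality-filtering pipeline. The point it demonstrates is that a page rejected under one extractor's output can survive under another's, so taking the union over extractors keeps more pages.

```python
# Toy sketch of the union-over-extractors intervention.
# Assumptions: the extractors and the filter are illustrative stand-ins,
# not the paper's actual extractors or the DCLM filtering pipeline.
import re
from html.parser import HTMLParser


class ParagraphExtractor(HTMLParser):
    """Extractor A: keeps only text inside <p> tags (a strict
    main-content heuristic, so it can miss text in other elements)."""

    def __init__(self):
        super().__init__()
        self.in_p = False
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p:
            self.parts.append(data.strip())


def extract_paragraphs(html: str) -> str:
    parser = ParagraphExtractor()
    parser.feed(html)
    return " ".join(p for p in parser.parts if p)


def extract_strip_tags(html: str) -> str:
    """Extractor B: crude tag stripping; keeps text that the
    paragraph-only extractor drops."""
    return " ".join(re.sub(r"<[^>]+>", " ", html).split())


def quality_filter(text: str, min_words: int = 3) -> bool:
    """Stand-in for a fixed filtering pipeline: keep pages whose
    extracted text has at least `min_words` words."""
    return len(text.split()) >= min_words


def union_extract(pages, extractors):
    """Keep a page if ANY extractor's output survives the filter,
    using the first surviving extraction as the page's text."""
    kept = []
    for html in pages:
        for extractor in extractors:
            text = extractor(html)
            if quality_filter(text):
                kept.append(text)
                break
    return kept


pages = [
    "<p>alpha beta gamma delta</p>",      # passes under both extractors
    "<div>epsilon zeta eta theta</div>",  # passes only via tag stripping
    "<p>hi</p>",                          # too short: filtered under both
]

baseline = union_extract(pages, [extract_paragraphs])
union = union_extract(pages, [extract_paragraphs, extract_strip_tags])
print(len(baseline), len(union))  # union keeps more pages than the baseline
```

Here the single-extractor baseline keeps one page while the union keeps two: the same filter applied to a different extraction lets an extra page through, which is the mechanism behind the reported token-yield increase.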