🤖 AI Summary
To address challenges in general-purpose web main-content extraction for large language model training data construction—including limited context windows, high inference costs, and formatting hallucinations—this paper proposes a lightweight HTML main-content extraction framework. Methodologically: (1) an HTML structure simplification algorithm reduces input redundancy; (2) a semantic block sequence classification task replaces end-to-end generation; and (3) a logits-processor-driven constrained decoding mechanism suppresses formatting errors. Contributions include: (i) releasing WebMainBench, a high-quality benchmark of 7,800+ diverse web pages with human-annotated labels; and (ii) achieving state-of-the-art performance with only a 0.6B-parameter generative model—81.58% ROUGE-N F1 (83.13% with fallback), outperforming all baselines. The framework enables efficient, accurate, and format-robust main-content extraction while substantially lowering computational overhead.
📝 Abstract
Accurately and efficiently extracting main content from general web pages is of great significance for constructing training data for large models. Well-pretrained decoder-only generative language models offer strong document comprehension capabilities and can therefore substantially enhance parsing quality; however, their use remains constrained by context window length, inference cost, and format hallucination. We present Dripper, an efficient HTML main content extraction framework powered by lightweight language models, which addresses these challenges through four key innovations: (1) We design a specialized HTML simplification algorithm that reduces the input token count to 22% of raw HTML while preserving critical structural information; (2) We reformulate main content extraction as a semantic block sequence classification task, significantly reducing inference cost; (3) We introduce a controlled decoding mechanism that strictly constrains the output space through logits processors, effectively eliminating the hallucination issues common in small-scale models; (4) We propose WebMainBench, an evaluation dataset containing over 7,800 web pages with meticulously human-annotated main content extraction labels. Experimental results demonstrate that, using only a 0.6B-parameter model, Dripper achieves state-of-the-art performance across all evaluation benchmarks and outperforms all baseline methods, attaining a ROUGE-N F1 score of 81.58% (83.13% with a fallback strategy) on our proposed WebMainBench dataset.
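The constrained decoding mechanism in innovation (3) can be illustrated with a minimal sketch: a logits processor masks every vocabulary position that does not belong to the currently allowed label set, so the model can only emit valid block-classification tokens. The function names, the toy vocabulary, and the allowed-id set below are hypothetical illustrations, not the paper's actual implementation.

```python
import math

def constrain_logits(logits, allowed_ids):
    """Mask every vocabulary position outside `allowed_ids` to -inf,
    so greedy or sampled decoding can only pick an allowed token."""
    allowed = set(allowed_ids)
    return [x if i in allowed else -math.inf for i, x in enumerate(logits)]

def greedy_pick(logits):
    """Return the index of the highest-scoring token (greedy decoding)."""
    return max(range(len(logits)), key=lambda i: logits[i])

# Toy vocabulary of 6 tokens; suppose only ids {1, 3} encode valid
# semantic-block labels at this decoding step.
raw = [2.0, 0.5, 3.1, 0.9, -1.0, 4.2]
masked = constrain_logits(raw, {1, 3})

print(greedy_pick(raw))     # unconstrained pick: 5 (a format-hallucinated token)
print(greedy_pick(masked))  # constrained pick: 3 (a valid label token)
```

Because the mask is applied to the logits before the softmax/argmax step, invalid tokens receive zero probability mass, which is how such a processor removes format hallucinations by construction rather than by post-hoc filtering.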