Dripper: Token-Efficient Main HTML Extraction with a Lightweight LM

📅 2025-11-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address challenges in general-purpose web main-content extraction for large language model training data construction—including limited context windows, high inference costs, and formatting hallucinations—this paper proposes a lightweight HTML main-content extraction framework. Methodologically: (1) an HTML structure simplification algorithm reduces input redundancy; (2) a semantic block sequence classification task replaces end-to-end generation; and (3) a logits-processor-driven constrained decoding mechanism suppresses formatting errors. Contributions include: (i) releasing WebMainBench, the first high-quality benchmark dataset (7,800+ diverse web pages); and (ii) achieving state-of-the-art performance using only a 0.6B-parameter generative model—81.58% ROUGE-N F1 (83.13% with fallback), significantly outperforming all baselines. The framework enables efficient, accurate, and format-robust main-content extraction while substantially lowering computational overhead.

📝 Abstract
Accurately and efficiently extracting main content from general web pages is of great significance for obtaining training data for large models. Well-pre-trained decoder-only generative language models offer excellent document comprehension, which can substantially improve parsing quality; however, they remain constrained by context window length, inference cost, and format hallucination. We present Dripper, an efficient HTML main-content extraction framework powered by lightweight language models, which addresses these challenges through four key innovations: (1) we design a specialized HTML simplification algorithm that reduces input token count to 22% of raw HTML while preserving critical structural information; (2) we reformulate main-content extraction as a semantic block sequence classification task, significantly reducing inference cost; (3) we introduce a controlled decoding mechanism that strictly constrains the output space through logits processors, effectively eliminating the hallucination issues common in small-scale models; (4) we propose WebMainBench, an evaluation dataset containing over 7,800 web pages with meticulously human-annotated main-content extraction labels. Experimental results demonstrate that, using only a 0.6B-parameter model, Dripper achieves state-of-the-art performance across all evaluation benchmarks and outperforms all baseline methods, attaining a ROUGE-N F1 score of 81.58% (83.13% with a fall-back strategy) on our proposed WebMainBench dataset.
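The HTML simplification step described in the abstract can be sketched roughly as follows. This is a hypothetical illustration using Python's standard-library `HTMLParser`, not the paper's published algorithm: the tag whitelist (`KEEP`), the boilerplate tag set (`DROP`), and the decision to discard all attributes are assumptions made for the sketch.

```python
from html.parser import HTMLParser

# Tags whose entire subtree is discarded (assumed typical boilerplate).
DROP = {"script", "style", "noscript", "iframe", "svg"}
# Structural tags kept in the simplified skeleton; all others are unwrapped.
KEEP = {"html", "body", "div", "p", "h1", "h2", "h3", "ul", "ol", "li",
        "table", "tr", "td", "article", "section"}

class Simplifier(HTMLParser):
    """Strip attributes, boilerplate subtrees, and non-structural markup."""

    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0  # >0 while inside a DROP subtree

    def handle_starttag(self, tag, attrs):
        if tag in DROP:
            self.skip_depth += 1
        elif self.skip_depth == 0 and tag in KEEP:
            self.out.append(f"<{tag}>")  # attributes dropped entirely

    def handle_endtag(self, tag):
        if tag in DROP:
            self.skip_depth = max(0, self.skip_depth - 1)
        elif self.skip_depth == 0 and tag in KEEP:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self.skip_depth == 0 and data.strip():
            self.out.append(data.strip())

def simplify(html: str) -> str:
    parser = Simplifier()
    parser.feed(html)
    return " ".join(parser.out)
```

For example, `simplify('<div class="x"><script>junk()</script><p>Hello</p></div>')` keeps the `<div>`/`<p>` skeleton and the text `Hello` while dropping the script content and the `class` attribute, which is the kind of token reduction the abstract quantifies at 22% of raw HTML.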
Problem

Research questions and friction points this paper is trying to address.

Extracting main content from web pages efficiently for large model training
Overcoming context window limitations and high inference costs in parsing
Reducing format hallucination issues in lightweight language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

HTML simplification algorithm cuts input tokens to 22% of raw HTML while preserving structure
Reformulating extraction as semantic block sequence classification lowers inference cost
Logits-processor-constrained decoding suppresses format hallucination in small models
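The constrained-decoding idea in the bullets above can be illustrated with a toy logits mask. This is a minimal sketch, not the paper's implementation: the toy vocabulary, the set of allowed token IDs, and the logit values are all invented for the example. The point is only that masking every disallowed vocabulary position to negative infinity makes it impossible for a small model to emit a malformed label.

```python
NEG_INF = float("-inf")

def constrain_logits(logits, allowed_ids):
    """Mask every vocabulary position outside `allowed_ids` to -inf,
    so greedy or sampled decoding can only pick an allowed token."""
    return [x if i in allowed_ids else NEG_INF for i, x in enumerate(logits)]

def greedy_pick(logits):
    """Index of the highest-scoring token (greedy decoding)."""
    return max(range(len(logits)), key=lambda i: logits[i])

# Toy vocabulary: ids 0-9 stand for block labels "0".."9", id 10 for a separator.
# Suppose only block indices {2, 5} and the separator are legal at this step.
logits = [0.1, 3.0, 0.5, 2.7, 0.0, 1.2, 0.3, 0.9, 0.2, 0.1, 0.4]
allowed = {2, 5, 10}
masked = constrain_logits(logits, allowed)
# Unconstrained greedy decoding would pick id 1 (logit 3.0), an illegal token;
# constrained greedy decoding picks id 5, the best legal one.
```

In a real decoder stack this mask would be applied at every generation step with a step-dependent `allowed_ids` (e.g. via a `LogitsProcessor` hook in libraries such as Hugging Face Transformers), which is consistent with the paper's description of strictly constraining the output space.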
👥 Authors
Mengjie Liu, AstraZeneca (Machine Learning, Synthesis Planning, Drug Discovery)
Jiahui Peng, Shanghai Artificial Intelligence Laboratory
Pei Chu, Shanghai Artificial Intelligence Laboratory
Jiantao Qiu, EE Department, Tsinghua University
Ren Ma, Shanghai AI Lab (LLM pretraining, RLHF, NLP)
He Zhu, Shanghai Artificial Intelligence Laboratory
Rui Min, Hong Kong University of Science and Technology (Machine Learning, Agent, Trustworthy AI)
Lindong Lu, Shanghai Artificial Intelligence Laboratory
Wenchang Ning, Shanghai Artificial Intelligence Laboratory
Linfeng Hou, Shanghai Artificial Intelligence Laboratory
Kaiwen Liu, University of Michigan (Control Theory, Robotics, Machine Learning, Human-Robot Interactions)
Yuan Qu, Shanghai Artificial Intelligence Laboratory
Zhenxiang Li, Shanghai Artificial Intelligence Laboratory
Chao Xu, Shanghai Artificial Intelligence Laboratory
Zhongying Tu, Shanghai Artificial Intelligence Laboratory
Wentao Zhang, Institute of Physics, Chinese Academy of Sciences (photoemission, superconductivity, cuprate, HTSC, time-resolved)
Conghui He, Shanghai AI Laboratory (Data-centric AI, LLM, Document Intelligence)