Prune4Web: DOM Tree Pruning Programming for Web Agent

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency and semantic information loss incurred by large-scale DOM trees (10k–100k nodes) in web navigation agents, this paper introduces *programmatic DOM pruning*, a paradigm that replaces direct LLM-based DOM parsing with the generation of executable Python scoring scripts that dynamically filter semantically relevant elements. The method integrates task decomposition, semantic cue guidance, and a two-turn dialogue-based script generation process, jointly optimizing the Planner, Programmatic Filter, and Grounder. On the low-level grounding task, the approach improves accuracy from 46.8% to 88.28% while reducing candidate elements by 25–50x. It significantly outperforms baseline methods, including DOM truncation, heuristic filtering, and standalone ranking, and is the first to simultaneously achieve high accuracy and strong scalability on large DOMs.

📝 Abstract
Web automation employs intelligent agents to execute high-level tasks by mimicking human interactions with web interfaces. Despite the capabilities of recent Large Language Model (LLM)-based web agents, navigating complex, real-world webpages efficiently remains a significant hurdle due to the prohibitively large size of Document Object Model (DOM) structures, often ranging from 10,000 to 100,000 tokens. Existing strategies typically rely on crude DOM truncation -- risking the loss of critical information -- or employ inefficient heuristics and separate ranking models, failing to achieve an optimal balance between precision and scalability. To address these challenges, we introduce Prune4Web, a novel paradigm that shifts DOM processing from resource-intensive LLM reading to efficient programmatic pruning. Central to our approach is DOM Tree Pruning Programming, where an LLM generates executable Python scoring scripts to dynamically filter DOM elements based on semantic cues from decomposed sub-tasks. This mechanism eliminates the need for LLMs to ingest raw, massive DOMs, instead delegating traversal and scoring to lightweight, interpretable programs. This methodology achieves a 25x to 50x reduction in candidate elements for grounding, thereby facilitating precise action localization while mitigating attention dilution. Furthermore, we propose a specialized data annotation pipeline and a two-turn dialogue training strategy that jointly optimizes the Planner, Programmatic Filter, and Grounder within a unified framework. Extensive experiments demonstrate state-of-the-art performance. Notably, on our low-level grounding task, Prune4Web dramatically improves accuracy from 46.8% to 88.28%, underscoring its efficacy in real-world web automation.
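The core mechanism the abstract describes, an LLM-emitted scoring script that traverses the DOM and keeps only elements relevant to the current sub-task, can be sketched as follows. This is a hand-written stand-in, not the paper's actual generated code: the `DomCollector`, `score_element`, and `prune` names, the cue-overlap scoring heuristic, and the sample page are all illustrative assumptions.

```python
# Stand-in for an LLM-generated DOM scoring script (Prune4Web has the
# LLM emit such scripts; all names and heuristics here are illustrative).
from html.parser import HTMLParser

class DomCollector(HTMLParser):
    """Flatten a page into (tag, attrs, text) element records."""
    def __init__(self):
        super().__init__()
        self.elements = []
        self._stack = []

    def handle_starttag(self, tag, attrs):
        self._stack.append({"tag": tag, "attrs": dict(attrs), "text": ""})

    def handle_data(self, data):
        if self._stack:
            self._stack[-1]["text"] += data.strip()

    def handle_endtag(self, tag):
        if self._stack:
            self.elements.append(self._stack.pop())

def score_element(el, cues):
    """Score = number of semantic cues found in the element's text or
    attribute values (a deliberately simple relevance heuristic)."""
    haystack = " ".join([el["text"], *el["attrs"].values()]).lower()
    return sum(cue in haystack for cue in cues)

def prune(html, cues, top_k=3):
    """Traverse the DOM, score every element, and keep only the
    top-k positively scored candidates for the grounding step."""
    parser = DomCollector()
    parser.feed(html)
    ranked = sorted(parser.elements,
                    key=lambda el: score_element(el, cues),
                    reverse=True)
    return [el for el in ranked[:top_k] if score_element(el, cues) > 0]

page = """
<div><a href="/cart">View cart</a>
<button id="checkout-btn">Proceed to checkout</button>
<p>Free shipping on orders over $50</p></div>
"""
# Cues would come from the Planner's decomposed sub-task.
candidates = prune(page, cues=["checkout", "cart"])
```

Because the grounding model then only sees the handful of surviving candidates rather than the raw page, the attention-dilution problem on 10k–100k token DOMs is sidestepped; the script itself runs in lightweight Python rather than through the LLM.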
Problem

Research questions and friction points this paper is trying to address.

Addresses inefficient DOM navigation in web automation due to large token sizes
Reduces reliance on crude DOM truncation risking critical information loss
Shifts DOM processing from LLM reading to programmatic pruning for scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates Python scripts for DOM pruning
Shifts from LLM reading to programmatic filtering
Reduces candidate elements by 25x to 50x
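The three jointly optimized modules named in the abstract compose into a simple pipeline. The sketch below stubs each LLM module with toy logic purely to show the data flow; the function bodies, cue lists, and sample DOM are assumptions, not the paper's implementation.

```python
# Illustrative composition of the three Prune4Web stages; the real
# Planner/Filter/Grounder are LLM modules, stubbed here with toy logic.

def planner(task):
    # Decompose the high-level task into a sub-task plus semantic cues.
    return {"subtask": "click the checkout button",
            "cues": ["checkout", "button"]}

def programmatic_filter(dom_elements, cues):
    # Stand-in for executing an LLM-generated scoring script:
    # keep only elements mentioning any cue.
    return [el for el in dom_elements
            if any(cue in el.lower() for cue in cues)]

def grounder(candidates):
    # With a pruned candidate set, localize the target element.
    return candidates[0] if candidates else None

dom = ["<a href='/home'>Home</a>",
       "<button id='checkout-btn'>Checkout</button>",
       "<p>Free shipping</p>"]
plan = planner("buy the item in my cart")
candidates = programmatic_filter(dom, plan["cues"])
action_target = grounder(candidates)
```

The key design point is that only the short `candidates` list, not the full `dom`, ever reaches the grounding model, which is what enables the reported 25x to 50x reduction in candidate elements.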
Authors
Jiayuan Zhang
Beihang University
Federated Learning
Kaiquan Chen
School of Software & QRI, Beihang University, Beijing, China
Zhihao Lu
School of Software & QRI, Beihang University, Beijing, China
Enshen Zhou
Beihang University
Embodied AI, Embodied Agent, Robot Learning, Generative Model
Qian Yu
Professor, Dept of Earth, Geographic, and Climate Sciences, University of Massachusetts-Amherst
GIS, remote sensing, spatial modeling
Jing Zhang
School of Software & QRI, Beihang University, Beijing, China