Grid Labeling: Crowdsourcing Task-Specific Importance from Visualizations

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the high cost and low inter-annotator consistency of visual-importance labeling for task-driven visualization. The authors propose Grid Labeling, a crowdsourcing framework that dynamically partitions a visualization into task-aware Adaptive Grids, keeping the cognitive and physical labeling load low while maintaining high annotation consistency. The method combines adaptive spatial segmentation with annotator behavior modeling, and is validated in a human-subject study against ImportAnnots and BubbleView: at an equivalent annotation scale, Grid Labeling yields the least noisy data and the highest inter-annotator agreement, while requiring less physical and cognitive effort. To the authors' knowledge, it is the first lightweight labeling paradigm to jointly optimize structural adaptability and task semantics. It enables scalable, high-quality training data for task-oriented saliency prediction models, advancing visual importance modeling from stimulus-driven to task-driven paradigms.

📝 Abstract
Knowing where people look in visualizations is key to effective design. Yet, existing research primarily focuses on free-viewing-based saliency models, even though visual attention is inherently task-dependent. Collecting task-relevant importance data remains a resource-intensive challenge. To address this, we introduce Grid Labeling, a novel annotation method for collecting task-specific importance data to enhance saliency prediction models. Grid Labeling dynamically segments visualizations into Adaptive Grids, enabling efficient, low-effort annotation while adapting to visualization structure. We conducted a human-subject study comparing Grid Labeling with existing annotation methods, ImportAnnots and BubbleView, across multiple metrics. Results show that Grid Labeling produces the least noisy data and the highest inter-participant agreement with fewer participants while requiring less physical (e.g., clicks/mouse movements) and cognitive effort.
Problem

Research questions and friction points this paper is trying to address.

Task-specific importance data collection
Enhancing saliency prediction models
Efficient and low-effort annotation method
Innovation

Methods, ideas, or system contributions that make the work stand out.

Grid Labeling annotation method
Adaptive Grids segmentation
Task-specific importance data collection
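The paper itself does not publish the Adaptive Grids algorithm in this summary, but the core idea — partition a visualization into coarse cells, refine cells where the visualization's structure is dense, then aggregate per-cell importance votes across annotators — can be sketched as follows. Everything here (function names, the coarse/fine split factors, the vote format) is a hypothetical illustration, not the authors' implementation.

```python
from collections import defaultdict

def adaptive_grid(width, height, dense_regions, coarse=4, fine=2):
    """Hypothetical sketch of adaptive grid partitioning (not the paper's code).

    Splits the canvas into coarse x coarse cells, then subdivides any cell
    that overlaps a structurally dense region into fine x fine subcells.
    Returns cells as (x, y, w, h) tuples.
    """
    cells = []
    cw, ch = width // coarse, height // coarse
    for i in range(coarse):
        for j in range(coarse):
            x, y = i * cw, j * ch
            cell = (x, y, cw, ch)
            if any(overlaps(cell, r) for r in dense_regions):
                # Refine: this cell covers dense visualization structure.
                sw, sh = cw // fine, ch // fine
                for a in range(fine):
                    for b in range(fine):
                        cells.append((x + a * sw, y + b * sh, sw, sh))
            else:
                cells.append(cell)
    return cells

def overlaps(cell, region):
    """Axis-aligned rectangle intersection test."""
    x, y, w, h = cell
    rx, ry, rw, rh = region
    return x < rx + rw and rx < x + w and y < ry + rh and ry < y + h

def aggregate(annotations):
    """Average per-cell importance votes over annotators.

    Each annotation is a dict mapping cell index -> importance in [0, 1];
    cells an annotator skipped simply contribute no vote.
    """
    votes = defaultdict(list)
    for ann in annotations:
        for idx, score in ann.items():
            votes[idx].append(score)
    return {idx: sum(v) / len(v) for idx, v in votes.items()}
```

A usage sketch: for a 400x400 visualization with one dense region in the top-left corner, only the cell covering that region is refined, so annotators rate a handful of coarse cells plus a few fine ones — far fewer interactions than pixel-level painting, which is the source of the reduced physical effort the abstract reports.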