Where, What, Why: Towards Explainable Driver Attention Prediction

📅 2025-06-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing driver attention prediction methods localize only gaze points (“where”), lacking joint modeling of cognitive motivations (“why”) and semantic content (“what”), thereby limiting mechanistic interpretability. Method: We propose the first explainable driver cognitive attention prediction task, unifying “where-what-why” reasoning. To this end, we introduce W3DA—a large-scale, multi-scenario dataset—and design LLada, an end-to-end framework integrating pixel-level visual encoding, semantic segmentation, and large language model–driven causal reasoning for joint attention prediction and attribution. Contribution/Results: Experiments demonstrate that LLada significantly outperforms state-of-the-art methods across multiple benchmarks, exhibits strong cross-scenario generalization, and—uniquely—enables systematic, interpretable modeling of driver cognitive attention. This work establishes a new paradigm for intelligent cockpit design and human factors research.

📝 Abstract
Modeling task-driven attention in driving is a fundamental challenge for both autonomous vehicles and cognitive science. Existing methods primarily predict where drivers look by generating spatial heatmaps, but fail to capture the cognitive motivations behind attention allocation in specific contexts, which limits deeper understanding of attention mechanisms. To bridge this gap, we introduce Explainable Driver Attention Prediction, a novel task paradigm that jointly predicts spatial attention regions (where), parses attended semantics (what), and provides cognitive reasoning for attention allocation (why). To support this, we present W3DA, the first large-scale explainable driver attention dataset. It enriches existing benchmarks with detailed semantic and causal annotations across diverse driving scenarios, including normal conditions, safety-critical situations, and traffic accidents. We further propose LLada, a large language model-driven framework for driver attention prediction, which unifies pixel modeling, semantic parsing, and cognitive reasoning within an end-to-end architecture. Extensive experiments demonstrate the effectiveness of LLada, exhibiting robust generalization across datasets and driving conditions. This work serves as a key step toward a deeper understanding of driver attention mechanisms, with significant implications for autonomous driving, intelligent driver training, and human-computer interaction.
Problem

Research questions and friction points this paper is trying to address.

How to predict where drivers look in driving scenarios
How to identify what drivers focus on semantically
How to explain why drivers allocate attention cognitively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly predicts where, what, and why
Introduces W3DA large-scale dataset
Proposes LLada end-to-end framework