AI Summary
To address the limited robustness of Vision Transformers (ViTs) for intent prediction in uncertain driving scenarios, this paper proposes an attention-guided method that integrates eye-tracking data. The core innovation is the fixation-attention intersection (FAX) loss function, which, for the first time, explicitly aligns ViT self-attention weights with human gaze distributions, enabling cognitively interpretable attention modeling. The method jointly incorporates fixation-map encoding, multi-head attention visualization, and dynamic similarity measurement, and is optimized end-to-end on both real-world and VR driving datasets. Experiments demonstrate significant improvements in left/right turn prediction accuracy, alongside enhanced spatial consistency between model-generated attention heatmaps and human gaze maps (Pearson's *r* > 0.82). This work establishes a novel, interpretable, and generalizable paradigm for driver behavior modeling and human-centered AI.
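The summary does not spell out the FAX loss formula. As a minimal sketch, assuming a histogram-intersection form suggested by the name (both maps normalized to probability distributions, loss measuring their lack of overlap), it could look like the following; the function name and the exact formulation are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def fax_loss(attention: np.ndarray, fixation: np.ndarray) -> float:
    """Hypothetical fixation-attention intersection (FAX) loss sketch.

    Assumption: both maps are normalized to sum to 1, and the loss is
    1 minus their pixel-wise overlap, so 0 means the attention map and
    the human fixation map coincide exactly and 1 means no overlap.
    """
    a = attention / attention.sum()
    f = fixation / fixation.sum()
    return float(1.0 - np.minimum(a, f).sum())

# Identical maps give zero loss; disjoint maps give the maximum loss of 1.
m = np.array([[0.0, 1.0], [1.0, 0.0]])
print(fax_loss(m, m))        # → 0.0
print(fax_loss(m, 1.0 - m))  # → 1.0
```

In training, such a term would be added to the ordinary classification loss so that gradients pull the self-attention maps toward the recorded gaze distribution.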
Abstract
Vision Transformers (ViTs) have advanced computer vision, yet their efficacy in complex tasks like driving remains underexplored. This study enhances ViTs by integrating human eye gaze, captured via eye-tracking, to increase prediction accuracy in driving tasks under uncertainty, in both real-world and virtual reality settings. First, we establish the significance of human eye gaze in left-right driving decisions, as observed in both human subjects and a ViT model. By comparing the similarity between human fixation maps and ViT attention weights, we reveal the dynamics of their overlap across individual heads and layers. This overlap demonstrates that fixation data can guide the model to distribute its attention weights more effectively. We introduce the fixation-attention intersection (FAX) loss, a novel loss function that significantly improves ViT performance under high-uncertainty conditions. Our results show that a ViT trained with FAX loss aligns its attention with human gaze patterns. This gaze-informed approach has significant potential for driver behavior analysis, as well as broader applications in human-centered AI systems, extending ViTs' use to complex visual environments.
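The per-head, per-layer comparison between attention weights and fixation maps described above can be sketched with Pearson's *r* over flattened maps. This is an illustrative helper under assumptions: the function name is hypothetical, and the resizing of attention maps to the fixation map's resolution is taken as already done:

```python
import numpy as np

def gaze_attention_similarity(attn_map: np.ndarray, fixation_map: np.ndarray) -> float:
    """Pearson's r between a flattened attention heatmap and a human
    fixation map of the same spatial resolution (hypothetical helper)."""
    a = attn_map.ravel().astype(float)
    f = fixation_map.ravel().astype(float)
    return float(np.corrcoef(a, f)[0, 1])

# Toy example: three synthetic "heads" that are noisy copies of one
# fixation map; each head's similarity to the gaze map is scored.
rng = np.random.default_rng(0)
fix = rng.random((14, 14))
heads = np.stack([fix + 0.1 * rng.random((14, 14)) for _ in range(3)])
scores = [gaze_attention_similarity(h, fix) for h in heads]
```

Scanning such scores over every head and layer is one way to expose where in the network the overlap with human gaze concentrates.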