🤖 AI Summary
This study investigates the mechanism by which augmented reality head-up display (AR-HUD) visual load induces inattentional blindness in drivers. Employing a high-ecological-validity on-road experiment, we integrated dynamic AR rendering, multi-level visual task load manipulation, real-time eye tracking, and a standardized unexpected-stimulus paradigm to quantify, for the first time, central-field inattentional blindness in naturalistic driving. Results demonstrate that higher AR visual load significantly reduces stimulus detection rates and prolongs reaction times; notably, ~68% of undetected stimuli occurred within the central visual field, confirming that AR graphics substantially capture core attentional resources. The key contribution is the identification of a significant interaction between visual load and stimulus spatial location, revealing an imbalance in attentional allocation during AR-HUD use. These findings provide empirically grounded, quantitative thresholds to inform safer AR-HUD interface design and attention-aware human-machine interaction.
📝 Abstract
As the integration of augmented reality (AR) technology in head-up displays (HUDs) becomes more prevalent in vehicles, it is crucial to understand how to design and evaluate AR interfaces to ensure safety. With new AR displays capable of rendering images with larger fields of view and at varying depths, the visual and cognitive separation between graphical and real-world visual stimuli will become increasingly difficult to quantify, as will drivers' ability to efficiently allocate visual attention between the two sets of stimuli. In this study, we present a user study that serves as a crucial first step toward understanding inattentional blindness during AR use in surface transportation, an area where knowledge is currently limited. Our primary goal is to investigate how the visual demand of AR tasks influences drivers' ability to detect stimuli, and whether the nature of the stimuli themselves plays a role in this effect. To address these questions, we designed an on-road user study aimed at producing a more realistic and ecologically valid understanding of the phenomenon. Our results show that drivers' ability to detect stimuli in the environment in a timely manner decreased as AR task visual demand increased, as demonstrated by both detection performance and inattentional blindness metrics. Further, inattentional blindness caused by AR displays appears to be more prevalent within drivers' central field of view. We conclude by discussing implications for a safety-centric evaluation framework for AR HUDs.