🤖 AI Summary
This study addresses a critical limitation in current external human-machine interfaces (eHMIs) for autonomous vehicles, which predominantly convey only the vehicle's own state and may inadvertently lead pedestrians to overlook surrounding environmental hazards. To mitigate this risk, the authors propose a novel projection-based attention-guiding eHMI (AGeHMI) that, for the first time, integrates an attention-guidance mechanism into eHMI design. By employing directional visual cues combined with risk-tiered color coding, AGeHMI actively directs pedestrians' attention toward potential threats. Through a virtual reality user study (N = 20), the research demonstrates that AGeHMI significantly improves the distribution of pedestrians' visual attention, effectively reduces collision risk with surrounding vehicles, and simultaneously enhances subjective confidence while lowering cognitive load.
📝 Abstract
As autonomous vehicles (AVs) are gradually being deployed in the real world, external Human-Machine Interfaces (eHMIs) are expected to serve as a critical solution for enhancing vehicle-pedestrian communication. However, existing eHMI designs typically focus solely on the ego vehicle's status, which can inadvertently capture pedestrians' attention or encourage misguided reliance on the AV's signals, leading them to neglect scanning for other surrounding hazards. To address this, we propose the Attention-Guiding eHMI (AGeHMI), a projection-based visualization that employs directional cues and risk-based color coding to actively guide pedestrians' attention toward potential environmental dangers. Evaluation through a virtual reality user study (N = 20) suggests that AGeHMI effectively influences participants' visual attention distribution and significantly reduces potential collision risks with surrounding vehicles, while simultaneously improving subjective confidence and reducing cognitive workload.