AI Summary
This study addresses the cognitive interference caused by traditional symbolic directional cues, which require semantic parsing and divert attention from primary tasks. To mitigate this issue, the authors propose a motion-based peripheral visual stimulus delivered monocularly through wearable displays, leveraging humans' innate sensitivity to motion for low-intrusion directional guidance. Drawing on principles from perceptual psychology and human factors, this approach eliminates the need for gaze shifts and symbolic interpretation. In a high-workload dual-task experiment, the method significantly improved directional judgment accuracy (p = .008) and showed a trend toward fewer errors on the primary task (p = .066), demonstrating its potential as an efficient, non-intrusive solution for directional cueing in demanding operational environments.
Abstract
Directional cues are crucial for environmental interaction. Conventional methods rely on symbolic visual or auditory reminders that require semantic interpretation, a process that proves challenging in demanding dual-task scenarios. We introduce a novel alternative for conveying directional cues on wearable displays: directly triggering motion perception using monocularly presented peripheral stimuli. This approach is designed for low visual interference, with the goal of reducing the need for gaze switching and the complex cognitive processing associated with symbols. User studies demonstrate our method's potential to robustly convey directional cues. Compared to a conventional arrow-based technique in a demanding dual-task scenario, our motion-based approach resulted in significantly more accurate interpretation of these directional cues (p = .008) and showed a trend toward reduced errors on the concurrent primary task (p = .066).