🤖 AI Summary
This work addresses the lack of effective visual feedback for gaze-based selection on existing always-on monocular AR devices, where limited eye-tracking accuracy and ambiguous target identification hinder interaction. To overcome these challenges, the authors propose PeriphAR, a novel system that leverages human peripheral vision to enable low-intrusion gaze interaction. PeriphAR dynamically optimizes highlight color contrast through a chromatic-difference maximization strategy over neighboring objects and augments pre-attentive processing with shape and textual cues. Integrating eye tracking, a peripheral vision model, real-time object detection, and AR rendering, the system significantly improves selection efficiency, accuracy, and user preference, as demonstrated in user studies conducted in real-world environments, validating its effectiveness and practicality.
📄 Abstract
Gaze-based selection in XR requires visual confirmation due to eye-tracking limitations and target ambiguity in 3D contexts. Current designs for wide-FOV displays use world-locked, central overlays, which are not conducive to always-on AR glasses. This paper introduces PeriphAR (per-ree-far), a visualization technique that leverages peripheral vision for feedback during gaze-based selection on a monocular AR display. In a first user study, we isolated text, color, and shape properties of target objects to compare peripheral selection cues. Peripheral vision was more sensitive to color than shape, but this sensitivity rapidly declined at lower contrast. To preserve preattentive processing of color, we developed two strategies to enhance color in users' peripheral vision. In a second user study, our strategy that maximized contrast of the target to the neighboring object with the most similar color was subjectively preferred. As proof of concept, we implemented PeriphAR in an end-to-end system to test performance with real-world object detection.
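The preferred contrast strategy can be read as a maximin choice: among candidate highlight colors, pick the one whose chromatic difference to the most similar neighboring object's color is largest. A minimal sketch of that idea, assuming colors are given in CIELAB and using the simple CIE76 Euclidean color difference (the paper's actual color model and candidate set are not specified here, so `pick_highlight` and its inputs are illustrative):

```python
import math

def delta_e(c1, c2):
    # CIE76 color difference: Euclidean distance between two Lab triples
    return math.dist(c1, c2)

def pick_highlight(candidates, neighbor_colors):
    """Maximin contrast: choose the candidate highlight color whose
    distance to its most similar (nearest) neighbor color is largest."""
    return max(candidates,
               key=lambda c: min(delta_e(c, n) for n in neighbor_colors))

# Hypothetical example: neighbors are reddish and greenish in Lab space,
# so a blue highlight maximizes contrast to the most similar neighbor.
neighbors = [(53.0, 80.0, 67.0),      # Lab red
             (46.0, -51.0, 50.0)]     # Lab green
candidates = [(54.0, 81.0, 70.0),     # near-red (low contrast to red)
              (32.0, 79.0, -108.0)]   # blue
print(pick_highlight(candidates, neighbors))  # → (32.0, 79.0, -108.0)
```

Perceptually uniform spaces such as CIELAB are a natural fit here because Euclidean distance roughly tracks perceived color difference, which is what peripheral sensitivity to contrast depends on.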