🤖 AI Summary
Current AR motion guidance design guidelines predominantly focus on upper-body, in-field-of-view movements and lack empirical evidence for scenarios involving variable visibility (e.g., occlusion, limited viewpoints) and multi-planar motion (sagittal, coronal, and transverse planes). This study systematically evaluates how visual encoding strategies (such as trajectory lines, directional arrows, and hybrid solid-dashed cues), in combination with viewing perspectives (first-person, third-person, and mixed), affect motion accuracy and usability, via a controlled experiment spanning three representative full-body motion tasks. Results reveal that the optimal perspective depends critically on motion visibility, that showing more global motion information can degrade performance, and that specific encoding-perspective pairings significantly improve guidance efficacy. Based on these findings, we propose an empirically grounded design framework for AR-based motion guidance, enabling robust, context-adaptive full-body movement instruction across diverse real-world settings.
📝 Abstract
Augmented reality (AR) offers promising opportunities to support movement-based activities, such as personal training or physical therapy, with real-time, spatially situated visual cues. While many approaches leverage AR to guide motion, existing design guidelines focus on simple, upper-body movements within the user's field of view. We lack evidence-based design recommendations for guiding more diverse scenarios involving movements with varying levels of visibility and direction. We conducted an experiment to investigate how different visual encodings and perspectives affect motion guidance performance and usability, using three exercises that varied in visibility and planes of motion. Our findings reveal significant differences in preference and performance across designs. Notably, the best perspective varied depending on motion visibility, and showing more information about the overall motion did not necessarily improve motion execution. We provide empirically grounded guidelines for designing immersive, interactive visualizations for motion guidance to support more effective AR systems.