Investigating Encoding and Perspective for Augmented Reality

📅 2025-09-30
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Current AR motion guidance design guidelines predominantly focus on upper-body, in-field-of-view movements, offering little empirical evidence for scenarios involving variable visibility (e.g., occlusion, limited viewpoints) and multi-planar motion (sagittal, coronal, and transverse planes). This study systematically evaluates how visual encoding strategies (such as trajectory lines, directional arrows, and hybrid solid-dashed cues) paired with viewing perspectives (first-person, third-person, and mixed) affect motion accuracy and usability, via a controlled experiment spanning three representative full-body motion tasks. Results reveal that the optimal perspective depends on motion visibility, that showing more information about the overall motion does not necessarily improve execution, and that specific encoding–perspective pairings significantly affect guidance efficacy. Based on these findings, we propose an empirically grounded design framework for AR-based motion guidance, supporting robust, context-adaptive full-body movement instruction across diverse real-world settings.

📝 Abstract
Augmented reality (AR) offers promising opportunities to support movement-based activities, such as personal training or physical therapy, with real-time, spatially situated visual cues. While many approaches leverage AR to guide motion, existing design guidelines focus on simple, upper-body movements within the user's field of view. We lack evidence-based design recommendations for guiding more diverse scenarios involving movements with varying levels of visibility and direction. We conducted an experiment to investigate how different visual encodings and perspectives affect motion guidance performance and usability, using three exercises that varied in visibility and planes of motion. Our findings reveal significant differences in preference and performance across designs. Notably, the best perspective varied depending on motion visibility, and showing more information about the overall motion did not necessarily improve motion execution. We provide empirically grounded guidelines for designing immersive, interactive visualizations for motion guidance to support more effective AR systems.
Problem

Research questions and friction points this paper aims to address.

Investigating AR visual encodings for diverse movement guidance
Addressing motion visibility challenges in augmented reality systems
Developing evidence-based design guidelines for interactive motion visualization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual encodings and perspectives tested for motion guidance
Exercises varied in visibility and motion planes
Empirical guidelines for immersive AR visualizations provided
Authors

Jade Kandel
University of North Carolina at Chapel Hill

Sriya Kasumarthi
University of North Carolina at Chapel Hill

Spiros Tsalikis
University of North Carolina at Chapel Hill

Chelsea Duppen
University of North Carolina at Chapel Hill

Daniel Szafir
University of North Carolina at Chapel Hill
Human-Robot Interaction, Human-Computer Interaction

Michael Lewek
University of North Carolina at Chapel Hill

Henry Fuchs
Federico Gil Distinguished Professor of Computer Science, University of North Carolina
computer graphics, virtual reality, augmented reality

Danielle Szafir
University of North Carolina at Chapel Hill