🤖 AI Summary
To address insufficient personalization and delayed motion feedback in dance learning for beginners, this paper proposes a visual affordance-driven AR dance instruction framework. The system automatically converts user-selected dance videos into interactive AR learning content, integrating 3D virtual instructor guidance, audio-motion temporal alignment, and adaptive dynamic visual cues—including joint trajectory highlighting and beat markers—to enable both personalized content generation and real-time execution feedback. Technically, it combines lightweight human pose estimation, multimodal synchronized rendering, and on-the-fly visual cue generation. User studies demonstrate significant improvements: average motion accuracy increases notably, task completion rate rises by 37%, learning retention improves, and interaction naturalness receives 92% positive ratings. This work is the first to systematically apply visual affordance theory to AR-based dance education, establishing a scalable methodological foundation for embodied motor skill acquisition.
📝 Abstract
We propose AfforDance, an augmented reality (AR)-based dance learning system that generates personalized learning content and enhances learning through visual affordances. Our system converts user-selected dance videos into interactive learning experiences by integrating 3D reference avatars, audio synchronization, and adaptive visual cues that guide movement execution. This work contributes to personalized dance education by offering an adaptable, user-centered learning interface.