🤖 AI Summary
This study addresses the challenges visually impaired individuals face when learning physical movements, such as yoga poses or calisthenic exercises, given the lack of effective non-visual instructional tools. To bridge this gap, the authors combine high-fidelity 3D tactile human models with a user-centered participatory design process, producing custom 3D-printed models for blind learners. These models incorporate tactile markers that represent both static postures and continuous motion sequences. User studies show that, compared with conventional teaching methods, the proposed models significantly improve the speed and accuracy of movement comprehension, reduce learner uncertainty, and earn higher ratings for usability and motivation. The results indicate that the tactile models enhance spatial awareness and support more effective motor learning among visually impaired users.
📝 Abstract
Visual impairments create barriers to learning physical activities, because conventional training methods rely on visual demonstrations or on verbal descriptions that are often inadequate. This research explores how 3D-printed human body models can enhance movement comprehension for blind individuals. Through a participatory design process conducted in collaboration with a blind designer, we developed detailed 3D models representing various body movements and incorporated tactile reference elements to enhance spatial understanding. We conducted two user studies with 10 blind participants across two activity types: static yoga poses and sequential calisthenic movements. The results show that the 3D models significantly improved understanding speed, reduced the number of clarification questions, and enhanced movement accuracy compared with conventional teaching methods. Participants consistently rated the 3D models higher for ease of understanding, effectiveness, and motivation.