🤖 AI Summary
Educational videos often omit verbal descriptions of instructors’ visual gestures—such as pointing, labeling, and sketching—leading to information loss and excessive cognitive load for learners with low vision. To address this, we propose a motion-detection-based dynamic visual guidance method that identifies key pedagogical actions in real time. The approach integrates adaptive highlighting (preserving spatial context), localized magnification, and personalized feedback within a configurable assistive system. Its novelty lies in the synergistic integration of motion perception, multi-granularity visual enhancement, and user-adaptive strategies—ensuring both broad accessibility and flexibility across diverse instructional settings. User studies demonstrate significant improvements: among eight low-vision participants, action recognition speed increased by 37.2%, and subjective cognitive load decreased significantly (p < 0.01); notably, eight normally sighted users also exhibited enhanced focus and engagement, confirming its cross-population applicability.
📝 Abstract
Instructors often rely on visual actions such as pointing, marking, and sketching to convey information in educational presentation videos. These subtle visual cues often lack verbal descriptions, forcing low-vision (LV) learners to search for visual indicators or rely solely on audio, which can lead to missed information and increased cognitive load. To address this challenge, we conducted a co-design study with three LV participants and developed VeasyGuide, a tool that uses motion detection to identify instructor actions and dynamically highlight and magnify them. VeasyGuide produces familiar visual highlights that convey spatial context and adapt to diverse learners and content through extensive personalization and real-time visual feedback. VeasyGuide reduces visual search effort by clarifying what to look for and where to look. In an evaluation with 8 LV participants, learners demonstrated a significant improvement in detecting instructor actions, with faster response times and significantly reduced cognitive load. A separate evaluation with 8 sighted participants showed that VeasyGuide also enhanced engagement and attentiveness, suggesting its potential as a universally beneficial tool.