Vid2Coach: Transforming How-To Videos into Task Assistants

📅 2025-05-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Blind and low-vision (BLV) individuals face significant barriers in acquiring daily-life skills, such as cooking or exercising, from conventional visual instructional videos. Method: This work proposes a task-level active guidance framework for wearable smart glasses, integrating visual rehabilitation practices with multimodal video understanding, retrieval-augmented generation (RAG), and real-time progress monitoring. A domain-specific BLV knowledge base supplies non-visual workarounds; the system monitors the task through the glasses' onboard camera and delivers non-visual feedback, including spoken descriptions and voice-driven interaction, enabling dynamic, task-aware interventions rather than passive narration. Contribution/Results: Unlike prior assistive systems, it couples RAG with on-device visual perception for BLV task assistance. In a user study, participants made 58.5% fewer errors during cooking tasks than with their typical workflow, and all eight expressed willingness to adopt the system in daily life. The study validates an AI-augmented, not AI-replacement, paradigm for preserving and enhancing non-visual skill acquisition.
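The retrieval step in the summary can be sketched as a similarity search over a corpus of BLV-specific tips. The tips, function names, and bag-of-words cosine scoring below are illustrative assumptions for the sketch, not the paper's implementation, which applies retrieval-augmented generation over BLV-specific resources:

```python
import math
import re
from collections import Counter

# Hypothetical mini knowledge base of BLV-specific workarounds
# (illustrative entries; the paper's actual corpus is not reproduced here).
BLV_TIPS = [
    "Use a talking thermometer to check doneness instead of judging color.",
    "Nest your hand over the knife spine and curl fingers to guide chopping.",
    "Listen for the change in sizzling sound to tell when onions soften.",
]

def _bow(text):
    """Bag-of-words vector: a Counter over lowercase word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def _cosine(a, b):
    """Cosine similarity between two Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_workaround(step_text, tips=BLV_TIPS):
    """Return the tip most similar to the current instruction step."""
    query = _bow(step_text)
    return max(tips, key=lambda tip: _cosine(query, _bow(tip)))

print(retrieve_workaround("Cook the onions until they soften"))
# → "Listen for the change in sizzling sound to tell when onions soften."
```

A production pipeline would replace the bag-of-words scoring with dense embeddings and pass the retrieved tip to a language model as context, which is what "retrieval-augmented generation" refers to in the summary.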

📝 Abstract
People use videos to learn new recipes, exercises, and crafts. Such videos remain difficult for blind and low vision (BLV) people to follow as they rely on visual comparison. Our observations of visual rehabilitation therapists (VRTs) guiding BLV people to follow how-to videos revealed that VRTs provide both proactive and responsive support including detailed descriptions, non-visual workarounds, and progress feedback. We propose Vid2Coach, a system that transforms how-to videos into wearable camera-based assistants that provide accessible instructions and mixed-initiative feedback. From the video, Vid2Coach generates accessible instructions by augmenting narrated instructions with demonstration details and completion criteria for each step. It then uses retrieval-augmented-generation to extract relevant non-visual workarounds from BLV-specific resources. Vid2Coach then monitors user progress with a camera embedded in commercial smart glasses to provide context-aware instructions, proactive feedback, and answers to user questions. BLV participants (N=8) using Vid2Coach completed cooking tasks with 58.5% fewer errors than when using their typical workflow and wanted to use Vid2Coach in their daily lives. Vid2Coach demonstrates an opportunity for AI visual assistance that strengthens rather than replaces non-visual expertise.
Problem

Research questions and friction points this paper is trying to address.

Making how-to videos accessible for blind and low vision users
Providing non-visual workarounds and context-aware instructions
Reducing errors in task completion through AI assistance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates accessible instructions from how-to videos
Uses retrieval-augmented-generation for non-visual workarounds
Monitors progress via smart glasses for feedback
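The monitoring idea in the last bullet can be sketched as a loop that checks each step's completion criterion against incoming camera frames and speaks feedback when a step is done. All names below are placeholders, and the string-match detector stands in for the multimodal vision model the paper would query:

```python
from dataclasses import dataclass

@dataclass
class Step:
    instruction: str
    completion_criterion: str  # what the camera check should verify

def check_frame(frame, criterion):
    # Stand-in for a multimodal model query such as
    # "does this frame satisfy <criterion>?" (toy string match here).
    return criterion in frame

def coach(steps, frames):
    """Advance through steps as frames satisfy each completion criterion."""
    feedback = []
    i = 0
    for frame in frames:
        if i >= len(steps):
            break
        if check_frame(frame, steps[i].completion_criterion):
            feedback.append(f"Step done: {steps[i].instruction}")
            i += 1
    return feedback

steps = [Step("Chop the onion", "onion chopped"),
         Step("Heat the pan", "pan hot")]
frames = ["onion whole", "onion chopped", "pan hot"]
print(coach(steps, frames))
# → ['Step done: Chop the onion', 'Step done: Heat the pan']
```

The mixed-initiative behavior described in the abstract would add a second path into this loop: user questions interrupt it, and proactive feedback fires when a criterion stays unmet for too long.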