🤖 AI Summary
Blind and low-vision (BLV) individuals face significant barriers in accessing visually dominated video cooking tutorials. Method: This paper introduces a real-time, context-aware assistive system for cooking, integrating a wearable camera's video stream, online recipe videos, and non-visual cues reported by the user (touch, smell, taste) via multimodal joint modeling and contextual reasoning to dynamically align physical actions with video-based instructions. It features a mixed-initiative interaction mechanism: the AI both responds to user queries and proactively triggers critical guidance based on real-time stream analysis. Results: An evaluation with eight BLV participants showed improvements in task completion and cooking independence and yielded design insights. This work establishes a new paradigm of embodied, context-adaptive AI assistance for visually impaired users, grounded in empirical validation and technical innovation in multimodal perception, alignment, and interactive reasoning.
📝 Abstract
Videos offer rich audiovisual information that can support people in performing activities of daily living (ADLs), but they remain largely inaccessible to blind or low-vision (BLV) individuals. In cooking, BLV people often rely on non-visual cues, such as touch, taste, and smell, to navigate their environment, making it difficult to follow the predominantly audiovisual instructions found in video recipes. To address this problem, we introduce AROMA, an AI system that provides timely, real-time, context-aware assistance by integrating non-visual cues perceived by the user, a wearable camera feed, and video recipe content. AROMA uses a mixed-initiative approach: it responds to user requests while also proactively monitoring the video stream to offer timely alerts and guidance. This collaborative design leverages the complementary strengths of the user and the AI system to align the physical environment with the video recipe, helping the user interpret their current cooking state and make sense of the steps. We evaluated AROMA through a study with eight BLV participants and offer insights for designing interactive AI systems that support BLV individuals in performing ADLs.
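The mixed-initiative design described above can be sketched as an event loop that merges two channels: reactive user queries and proactive alerts raised by a monitor over the camera stream. The sketch below is purely illustrative; all class and function names (`MixedInitiativeAssistant`, `monitor_frame`, the keyword-based alert rule) are assumptions for exposition and are not drawn from the paper.

```python
# Hypothetical sketch of a mixed-initiative assistance loop: the assistant
# answers user queries on demand, while a background monitor of the camera
# stream can proactively enqueue alerts. Both channels feed one event queue.
import queue
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Event:
    source: str   # "user_query" or "monitor_alert"
    payload: str

class MixedInitiativeAssistant:
    def __init__(self, respond: Callable[[str], str]):
        # `respond` stands in for the multimodal reasoning model.
        self.events: "queue.Queue[Event]" = queue.Queue()
        self.respond = respond

    def user_query(self, text: str) -> None:
        # Reactive channel: the user asks a question (e.g. via speech).
        self.events.put(Event("user_query", text))

    def monitor_frame(self, frame_description: str) -> None:
        # Proactive channel: a stubbed perception step flags risky states.
        # A real system would run vision models on raw frames instead.
        if "boiling over" in frame_description or "smoke" in frame_description:
            self.events.put(Event("monitor_alert", frame_description))

    def step(self) -> Optional[str]:
        # Handle one pending event; alerts are prefixed so speech output
        # can signal urgency. Returns None when nothing is pending.
        try:
            ev = self.events.get_nowait()
        except queue.Empty:
            return None
        prefix = "ALERT: " if ev.source == "monitor_alert" else ""
        return prefix + self.respond(ev.payload)
```

In this toy version, events are handled first-in first-out; a real system would likely prioritize safety-critical alerts over pending queries.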