🤖 AI Summary
Blind and low-vision (BLV) users face significant challenges in interactively navigating 360° videos because they cannot autonomously control the viewpoint. Method: This paper introduces the first branching narrative framework tailored for BLV users, which automatically decomposes 360° video into voice-guided, multi-path narratives. Leveraging a multimodal machine learning pipeline, the system identifies semantic branch points, generates coherent narrative trajectories, and integrates spatial audio with real-time voice interaction to enable immersive, accessible navigation. Contribution/Results: An empirical evaluation with 12 BLV participants demonstrated statistically significant improvements in viewing autonomy (p < 0.01), a 47% increase in engagement, and emergent personalized exploration strategies. This work establishes the first accessible, controllable, and immersive interaction paradigm for BLV users viewing 360° video, providing a novel methodology and technical foundation for inclusive immersive media.
📝 Abstract
360° videos enable users to freely choose their viewing paths, but blind and low-vision (BLV) users are often excluded from this interactive experience. To bridge this gap, we present Branch Explorer, a system that transforms 360° videos into branching narratives -- stories that dynamically unfold based on viewer choices -- to support interactive viewing for BLV audiences. Our formative study identified three key considerations for accessible branching narratives: providing diverse branch options, ensuring coherent story progression, and enabling immersive navigation among branches. To address these needs, Branch Explorer employs a multimodal machine learning pipeline to generate diverse narrative paths, allowing users to flexibly make choices at detected branching points and seamlessly engage with each storyline through immersive audio guidance. An evaluation with 12 BLV viewers showed that Branch Explorer significantly enhanced user agency and engagement in 360° video viewing. Users also developed personalized strategies for exploring 360° content. We further highlight implications for supporting accessible exploration of videos and virtual environments.