🤖 AI Summary
VR concerts suffer from insufficient social immersion due to limited user scale. To address this, we propose a cross-device social enhancement method: real-time collection of non-VR viewers' chat messages (danmaku) from live-streaming platforms, followed by NLP-based sentiment analysis and multi-granularity engagement modeling to drive synchronized embodied behaviors—such as cheering and singing along—and spatial audio feedback among virtual audiences in VR. This work presents the first real-time translation of asynchronous livestream social signals into embodied, collective responses within VR, establishing an emotion-coupled co-presence enhancement mechanism that bridges physical and virtual audiences. The system integrates Unity VR, procedural virtual crowd behavior generation, and real-time spatial audio synthesis. A user study (n=48) demonstrates that coordinated audiovisual feedback significantly improves both immersion and co-presence (p<0.01), outperforming unimodal baselines.
📝 Abstract
Computer-mediated concerts can be enjoyed on various devices, from desktop and mobile to VR headsets, often with multiple device types supported simultaneously. However, because VR devices remain relatively inaccessible, comparatively few audience members congregate in VR venues, diminishing the unique social experience they could offer. To address this gap and enrich VR concert experiences, we present a novel approach that leverages non-VR user interaction data, specifically chat messages from audiences watching the same content on a live-streaming platform. Based on an analysis of audience reactions at offline concerts, we designed and prototyped a concert interaction translation system that extracts the level of engagement and emotions from chat messages and translates them into collective movements, cheers, and singalongs of virtual audience avatars in a VR venue. Our user study (n=48) demonstrates that our system, which combines both movement and audio reactions, significantly enhances the sense of immersion and co-presence compared with the previous method.
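The pipeline the abstract describes — collect chat messages, classify each one, aggregate a window into an engagement level, and map the result to a crowd reaction — could be sketched roughly as follows. This is an illustrative sketch only: the keyword lexicon, the window size, and the reaction labels are hypothetical stand-ins for the paper's actual NLP sentiment models and behavior taxonomy.

```python
from collections import Counter

# Hypothetical keyword lexicons standing in for the paper's NLP-based
# sentiment/engagement classifiers.
CHEER_WORDS = {"wow", "amazing", "encore"}
SING_WORDS = {"lalala", "singalong"}

def classify_chat(message: str) -> str:
    """Map one chat message to a coarse reaction label."""
    tokens = set(message.lower().split())
    if tokens & CHEER_WORDS:
        return "cheer"
    if tokens & SING_WORDS:
        return "singalong"
    return "idle"

def crowd_reaction(messages: list[str], window_size: int = 10) -> dict:
    """Aggregate the most recent messages into an engagement level
    and a dominant reaction to drive the virtual audience avatars."""
    recent = messages[-window_size:]
    counts = Counter(classify_chat(m) for m in recent)
    active = counts["cheer"] + counts["singalong"]
    engagement = active / max(len(recent), 1)  # fraction of engaged messages
    dominant = counts.most_common(1)[0][0] if recent else "idle"
    return {"engagement": engagement, "reaction": dominant}
```

In a real system the `reaction` label would select an avatar animation (cheering, swaying, singing along) and trigger the corresponding spatialized crowd audio, with `engagement` scaling how many avatars participate.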