🤖 AI Summary
To address the challenge of constructing immersive stereo video for VR/AR, this paper introduces the first volumetric video framework enabling mobile, synchronized multi-view audio-visual capture. Methodologically, we design a mobile multimodal acquisition system integrating 5K/60FPS high-speed imaging, spatial alignment, and precise temporal synchronization, and build an end-to-end multimodal volumetric reconstruction pipeline. We further propose the first reconstruction benchmark and evaluation protocol tailored for 6-DoF immersive VR. Our contributions are threefold: (1) the release of ImViD, a novel immersive volumetric video dataset featuring multiple dynamic scenes with 1–5 minute synchronized audio-visual sequences; (2) the establishment of the first 6-DoF multimodal VR reconstruction benchmark; and (3) empirical validation of baseline methods under high-fidelity rendering, large interactive volumes, and multimodal feedback, thereby establishing a new paradigm for immersive content generation.
📝 Abstract
User engagement is greatly enhanced by fully immersive multi-modal experiences that combine visual and auditory stimuli. Consequently, the next frontier in VR/AR technologies lies in immersive volumetric videos with complete scene capture, a large 6-DoF interaction space, multi-modal feedback, and high-resolution, high-frame-rate content. To stimulate the reconstruction of immersive volumetric videos, we introduce ImViD, a multi-view, multi-modal dataset featuring complete space-oriented data capture and various indoor/outdoor scenarios. Our capture rig supports multi-view video-audio capture while on the move, a capability absent in existing datasets, significantly enhancing the completeness, flexibility, and efficiency of data capture. The captured multi-view videos (with synchronized audio) are in 5K resolution at 60FPS, last 1–5 minutes, and include rich foreground-background elements and complex dynamics. We benchmark existing methods using our dataset and establish a base pipeline for constructing immersive volumetric videos from multi-view audio-visual inputs, enabling 6-DoF multi-modal immersive VR experiences. The benchmark and the reconstruction and interaction results demonstrate the effectiveness of our dataset and baseline method, which we believe will stimulate future research on immersive volumetric video production.