🤖 AI Summary
Existing spatial audio research is hindered by reliance on monaural datasets, which limits immersive multimodal modeling. To address this, the authors introduce MRSAudio, a large-scale multimodal spatial audio dataset featuring synchronized binaural and Ambisonic audio, egocentric and exocentric video, and motion trajectories, captured across four components covering realistic domains: MRSLife (daily life), MRSSpeech, MRSMusic, and MRSSing. The dataset includes fine-grained annotations (transcripts, phoneme boundaries, lyrics, scores, and prompts) that enable rigorous cross-modal alignment. The authors validate MRSAudio on five foundational tasks: audio spatialization, spatial text-to-speech, spatial singing voice synthesis, spatial music generation, and sound event localization and detection. Results show that MRSAudio enables high-quality spatial modeling. By addressing gaps in data scale, modality diversity, and annotation richness, MRSAudio provides a foundational resource for advancing multimodal spatial audio understanding and generation.
📝 Abstract
Humans rely on multisensory integration to perceive spatial environments, where auditory cues enable sound source localization in three-dimensional space. Despite the critical role of spatial audio in immersive technologies such as VR/AR, most existing multimodal datasets provide only monaural audio, which limits the development of spatial audio generation and understanding. To address these challenges, we introduce MRSAudio, a large-scale multimodal spatial audio dataset designed to advance research in spatial audio understanding and generation. MRSAudio spans four distinct components, MRSLife, MRSSpeech, MRSMusic, and MRSSing, covering diverse real-world scenarios. The dataset includes synchronized binaural and Ambisonic audio, exocentric and egocentric video, motion trajectories, and fine-grained annotations such as transcripts, phoneme boundaries, lyrics, scores, and prompts. To demonstrate the utility and versatility of MRSAudio, we establish five foundational tasks: audio spatialization, spatial text-to-speech, spatial singing voice synthesis, spatial music generation, and sound event localization and detection. Results show that MRSAudio enables high-quality spatial modeling and supports a broad range of spatial audio research. Demos and dataset access are available at https://mrsaudio.github.io.