🤖 AI Summary
This work addresses the challenge of language-driven, physically plausible binaural audio generation. Methodologically, we propose the first spatial audio synthesis framework to support multi-source dynamic sound fields with precise spatial control. Our approach introduces a spatially aware encoder and an azimuth-state matrix to guide latent diffusion models; constructs BEWO-1M, the first GPT-augmented, simulation-driven, million-scale tri-modal dataset of spatial audio, text, and images; and integrates multimodal retrieval alignment with GPT-assisted data synthesis. Experimental results demonstrate that our method significantly outperforms existing approaches in both objective and subjective evaluations. The generated audio adheres to acoustic physical principles, enabling high-fidelity, accurately localized, and trajectory-controllable immersive spatial audio synthesis. The framework mitigates two key bottlenecks: weak multi-source spatial modeling and the scarcity of high-quality spatial audio training data.
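The azimuth-state matrix can be pictured as a frame-by-direction conditioning grid fed to the diffusion model. The following is a minimal NumPy sketch of one plausible construction, not the paper's exact formulation: the function name `azimuth_state_matrix`, the bin count, and the Gaussian smoothing are all illustrative assumptions. It interpolates a few azimuth control points over time and soft-one-hot-encodes each frame's direction.

```python
import numpy as np

def azimuth_state_matrix(azimuths_deg, num_bins=90, num_frames=64):
    """Discretize an azimuth trajectory into a frame-by-bin state matrix.

    azimuths_deg: azimuth control points in degrees (-90 = far left,
      +90 = far right) sampled along the sound event.
    Returns a (num_frames, num_bins) matrix whose rows are soft one-hot
    encodings of the interpolated azimuth at each frame.
    """
    # Linearly interpolate the control points to one azimuth per frame.
    control_t = np.linspace(0.0, 1.0, num=len(azimuths_deg))
    frame_t = np.linspace(0.0, 1.0, num=num_frames)
    azi = np.interp(frame_t, control_t, azimuths_deg)

    # Map [-90, 90] degrees onto fractional bin indices [0, num_bins - 1].
    idx = (azi + 90.0) / 180.0 * (num_bins - 1)

    # Soft one-hot row per frame: a narrow Gaussian around each index,
    # normalized so every row sums to one.
    bins = np.arange(num_bins)
    mat = np.exp(-0.5 * ((bins[None, :] - idx[:, None]) / 1.5) ** 2)
    return mat / mat.sum(axis=1, keepdims=True)

# A source moving left -> center -> right over the clip.
M = azimuth_state_matrix([-80.0, 0.0, 80.0])
print(M.shape)  # (64, 90)
```

Each row of `M` can then serve as a per-frame conditioning vector, which is one simple way a moving source's trajectory could steer generation.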
📄 Abstract
Recently, diffusion models have achieved great success in mono-channel audio generation. Stereo audio generation is harder: soundscapes often comprise complex scenes with multiple sound objects and directions, and controlling stereo audio with spatial context remains challenging due to high data costs and unstable generative models. To the best of our knowledge, this work represents the first attempt to address these issues. We first construct BEWO-1M, a large-scale, simulation-based, GPT-assisted dataset with abundant soundscapes and descriptions, including moving and multiple sources. Beyond the text modality, we also pair a set of images with suitable stereo audio through retrieval to advance multimodal generation. Existing audio generation models tend to produce rather random and indistinct spatial audio. To provide accurate guidance for latent diffusion models, we introduce the SpatialSonic model, which uses spatial-aware encoders and azimuth state matrices to derive precise spatial guidance. With this guidance, our model not only generates immersive and controllable spatial audio from text but also extends to other modalities in a pioneering attempt. Finally, under fair settings, we conduct subjective and objective evaluations on simulated and real-world data to compare our approach with prevailing methods. The results demonstrate the effectiveness of our method, highlighting its capability to generate spatial audio that adheres to physical rules.
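To make the "physical rules" concrete: stereo spatialization is governed chiefly by interaural time and level differences (ITD and ILD). The sketch below is a simplified far-field approximation, not the paper's rendering pipeline; the function name `pan_mono_to_stereo` and all parameter values are illustrative. It pans a mono signal to a given azimuth using these two cues.

```python
import numpy as np

def pan_mono_to_stereo(mono, azimuth_deg, sr=16000, ear_dist_m=0.18, c=343.0):
    """Render a mono signal at a given azimuth using simple ITD/ILD cues.

    Far-field approximation: the interaural time difference (ITD) is
    ear_dist * sin(azimuth) / c, and a constant-power gain law supplies
    the interaural level difference (ILD). Positive azimuth = right.
    """
    theta = np.deg2rad(azimuth_deg)

    # Constant-power panning for the level difference:
    # map [-90, 90] degrees to a pan position in [0, 1].
    pan = theta / np.pi + 0.5
    g_l = np.cos(pan * np.pi / 2)
    g_r = np.sin(pan * np.pi / 2)

    # Integer-sample delay approximating the ITD; a positive ITD
    # means the sound reaches the far (left) ear later.
    itd = ear_dist_m * np.sin(theta) / c
    shift = int(round(abs(itd) * sr))
    delayed = np.concatenate([np.zeros(shift), mono])[: len(mono)]

    left = g_l * (delayed if itd > 0 else mono)
    right = g_r * (delayed if itd < 0 else mono)
    return np.stack([left, right], axis=0)

# 0.5 s of noise placed 60 degrees to the right.
x = np.random.randn(8000)
stereo = pan_mono_to_stereo(x, azimuth_deg=60.0)
print(stereo.shape)  # (2, 8000)
```

Evaluating whether generated stereo audio exhibits consistent ITD/ILD cues of this kind is one way to check adherence to acoustic physics.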