DynFOA: Generating First-Order Ambisonics with Conditional Diffusion for Dynamic and Acoustically Complex 360-Degree Videos

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods struggle to generate high-fidelity first-order Ambisonics (FOA) audio in dynamic, acoustically complex 360-degree videos, often neglecting moving sound sources and environmental acoustic effects such as occlusion, reflection, and reverberation. This work proposes a novel framework that integrates 3D Gaussian Splatting with a conditional diffusion model: a video encoder detects dynamic sound sources and reconstructs scene geometry and materials, while an audio encoder models 4D sound source trajectories, enabling real-time synthesis of viewpoint-consistent, high-fidelity FOA. By introducing 3D Gaussian Splatting into spatial audio generation for the first time, the method significantly outperforms existing approaches in spatial accuracy, acoustic fidelity, and distributional alignment, thereby enhancing auditory immersion in virtual reality and other immersive media applications.
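The generator described above is a conditional diffusion model steered by visual and trajectory features. As background only, the generic DDPM-style ancestral sampling loop that such a generator runs at inference time can be sketched as follows. This is an illustrative sketch with a hand-written noise schedule and a placeholder `denoise_fn`, not DynFOA's actual architecture; a real FOA generator would denoise spectrogram or latent tensors with a learned network conditioned on the encoder outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_conditional(denoise_fn, cond, shape, steps=50):
    """Generic DDPM-style ancestral sampling, conditioned on `cond`.

    `denoise_fn(x_t, t, cond)` is assumed to predict the noise added
    at step t (placeholder for a trained network).
    """
    betas = np.linspace(1e-4, 0.02, steps)   # illustrative linear schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    x = rng.standard_normal(shape)           # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = denoise_fn(x, t, cond)         # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:                            # add noise except on the last step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x
```

In DynFOA's setting, `cond` would carry the 4D source trajectories and reconstructed scene features, so the same loop yields different spatial audio as sources move or the viewpoint changes.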

📝 Abstract
Spatial audio is crucial for creating compelling immersive 360-degree video experiences. However, generating realistic spatial audio, such as first-order ambisonics (FOA), from 360-degree videos in complex acoustic scenes remains challenging. Existing methods often overlook the dynamic nature and acoustic complexity of 360-degree scenes, fail to fully account for dynamic sound sources, and neglect complex environmental effects such as occlusion, reflections, and reverberation, which are shaped by scene geometries and materials. We propose DynFOA, a framework based on dynamic acoustic perception and conditional diffusion, for generating high-fidelity FOA from 360-degree videos. DynFOA first performs visual processing via a video encoder, which detects and localizes multiple dynamic sound sources, estimates their depth and semantics, and reconstructs the scene geometry and materials using 3D Gaussian Splatting. This reconstruction accurately models occlusion, reflections, and reverberation based on the geometries and materials of the reconstructed 3D scene and the listener's viewpoint. The audio encoder then captures the spatial motion and temporal 4D sound source trajectories to fine-tune the diffusion-based FOA generator. The fine-tuned FOA generator adjusts spatial cues in real time, ensuring consistent directional fidelity during listener head rotation and complex environmental changes. Extensive evaluations demonstrate that DynFOA consistently outperforms existing methods across metrics such as spatial accuracy, acoustic fidelity, and distribution matching, while also improving the user experience. DynFOA thus provides a robust and scalable approach to rendering realistic dynamic spatial audio for VR and immersive media applications.
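For readers unfamiliar with the FOA representation the abstract targets: a first-order Ambisonics signal has four channels (an omnidirectional W channel plus three dipole channels X, Y, Z), and rotating the sound field for listener head rotation is a linear mix of those channels. The NumPy sketch below encodes a mono source at a given direction using the standard ACN/SN3D convention and applies a yaw rotation; it is background illustration only, not DynFOA's generator.

```python
import numpy as np

def encode_foa(mono, azimuth, elevation):
    """Encode a mono signal into first-order Ambisonics (ACN/SN3D).

    `azimuth`/`elevation` are in radians; returns channels in ACN
    order [W, Y, Z, X].
    """
    w = mono                                          # omnidirectional
    y = mono * np.sin(azimuth) * np.cos(elevation)    # left-right dipole
    z = mono * np.sin(elevation)                      # up-down dipole
    x = mono * np.cos(azimuth) * np.cos(elevation)    # front-back dipole
    return np.stack([w, y, z, x])

def rotate_foa(foa, yaw):
    """Rotate the sound field about the vertical axis (listener head yaw)."""
    w, y, z, x = foa
    c, s = np.cos(yaw), np.sin(yaw)
    # Yaw mixes the horizontal dipoles; W and Z are invariant.
    return np.stack([w, c * y + s * x, z, c * x - s * y])
```

For example, a frontal source (azimuth 0) rotated by a yaw of pi/2 yields the same channels as encoding that source directly at azimuth pi/2, which is the viewpoint-consistency property the abstract refers to during head rotation.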
Problem

Research questions and friction points this paper is trying to address.

spatial audio
first-order ambisonics
360-degree video
acoustic complexity
dynamic sound sources
Innovation

Methods, ideas, or system contributions that make the work stand out.

conditional diffusion
first-order ambisonics
3D Gaussian Splatting
dynamic sound sources
spatial audio rendering
Ziyu Luo
School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing, China
Lin Chen
School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing, China
Qiang Qu
Professor, Chinese Academy of Sciences, Shenzhen Institutes of Advanced Technology
Blockchain, Data Intelligence, Data-intensive Systems, Data Mining
Xiaoming Chen
School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing, China
Yiran Shen
School of Software, Shandong University
Mobile computing, Virtual reality