DynFOA: Generating First-Order Ambisonics with Conditional Diffusion for Dynamic and Acoustically Complex 360-Degree Videos

📅 2026-04-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the prevalent absence of spatial audio in existing 360-degree videos and the challenge of modeling dynamic sound sources alongside complex acoustic phenomena such as occlusion, reflection, and reverberation. To this end, the authors propose a novel approach that integrates dynamic scene reconstruction with a conditional diffusion model, uniquely incorporating geometric and material information derived from 3D Gaussian splatting into the audio generation pipeline. This integration enables joint modeling of moving sound sources and their interactions with the surrounding acoustic environment. Experimental results on the newly introduced M2G-360 dataset demonstrate that the proposed method significantly outperforms current state-of-the-art techniques in terms of spatial accuracy, acoustic fidelity, and overall immersion.
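As background on the first-order ambisonics (FOA) output format the paper targets, the sketch below encodes a mono source at a given direction into the four B-format channels. The ACN channel order and SN3D normalization are assumed here for illustration; this shows the representation only, not DynFOA's generation pipeline.

```python
import numpy as np

def encode_foa(mono: np.ndarray, azimuth: float, elevation: float) -> np.ndarray:
    """Encode a mono signal into first-order ambisonics (assumed ACN/SN3D B-format).

    azimuth/elevation are in radians; returns a (4, T) array in ACN order W, Y, Z, X.
    Illustrative sketch of the FOA target format, not DynFOA's method.
    """
    w = mono                                          # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)    # front-back axis
    y = mono * np.sin(azimuth) * np.cos(elevation)    # left-right axis
    z = mono * np.sin(elevation)                      # up-down axis
    return np.stack([w, y, z, x])                     # ACN ordering: W, Y, Z, X

# Example: a 1 kHz tone placed 45 degrees to the left, slightly above the horizon
sr = 48000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
foa = encode_foa(tone, azimuth=np.pi / 4, elevation=np.pi / 12)
```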
📝 Abstract
Spatial audio is crucial for immersive 360-degree video experiences, yet most 360-degree videos lack it due to the difficulty of capturing spatial audio during recording. Automatically generating spatial audio such as first-order ambisonics (FOA) from video therefore remains an important but challenging problem. In complex scenes, sound perception depends not only on sound source locations but also on scene geometry, materials, and dynamic interactions with the environment. However, existing approaches only rely on visual cues and fail to model dynamic sources and acoustic effects such as occlusion, reflections, and reverberation. To address these challenges, we propose DynFOA, a generative framework that synthesizes FOA from 360-degree videos by integrating dynamic scene reconstruction with conditional diffusion modeling. DynFOA analyzes the input video to detect and localize dynamic sound sources, estimate depth and semantics, and reconstruct scene geometry and materials using 3D Gaussian Splatting (3DGS). The reconstructed scene representation provides physically grounded features that capture acoustic interactions between sources, environment, and listener viewpoint. Conditioned on these features, a diffusion model generates spatial audio consistent with the scene dynamics and acoustic context. We introduce M2G-360, a dataset of 600 real-world clips divided into MoveSources, Multi-Source, and Geometry subsets for evaluating robustness under diverse conditions. Experiments show that DynFOA consistently outperforms existing methods in spatial accuracy, acoustic fidelity, distribution matching, and perceived immersive experience.
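To make the abstract's conditioning idea concrete, below is a minimal sketch of DDPM-style ancestral sampling in which the denoiser is conditioned on scene features (e.g. source trajectories and 3DGS geometry/material embeddings). The `denoiser` interface, noise schedule, and feature names are illustrative assumptions, not the paper's actual architecture.

```python
import torch

@torch.no_grad()
def sample_foa(denoiser, scene_feats, steps=50, shape=(1, 4, 48000 * 10)):
    """Illustrative conditional DDPM sampling of a 4-channel FOA clip.

    `denoiser(x_t, t, scene_feats)` is an assumed interface that predicts the
    noise added at step t, given scene conditioning; not DynFOA's real model.
    """
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(shape)                           # start from Gaussian noise
    for t in reversed(range(steps)):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = denoiser(x, t_batch, scene_feats)      # predicted noise, conditioned on scene
        # Posterior mean of x_{t-1} given x_t and the predicted noise
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x                                         # (B, 4, T) FOA waveform
```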
Problem

Research questions and friction points this paper is trying to address.

spatial audio
first-order ambisonics
360-degree video
acoustic effects
dynamic sound sources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conditional Diffusion
First-Order Ambisonics
3D Gaussian Splatting
Dynamic Sound Sources
Spatial Audio Synthesis
Ziyu Luo
School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing, China
Lin Chen
School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing, China
Qiang Qu
Professor, Chinese Academy of Sciences, Shenzhen Institutes of Advanced Technology
Blockchain, Data Intelligence, Data-intensive Systems, Data Mining
Xiaoming Chen
School of Computer and Artificial Intelligence, Beijing Technology and Business University, Beijing, China
Yiran Shen
School of Software, Shandong University
Mobile computing, Virtual reality