SpA2V: Harnessing Spatial Auditory Cues for Audio-driven Spatially-aware Video Generation

📅 2025-08-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing audio-driven video generation methods rely solely on audio semantics (e.g., sound source categories) while neglecting spatial auditory cues—such as azimuth, elevation, and motion direction—resulting in severe deficiencies in spatial layout fidelity and dynamic consistency. This paper introduces the first video generation framework that explicitly models spatialized audio features, including interaural spectral differences and loudness gradients. Our method employs a multi-stage layout planning strategy: first, a multimodal large language model parses spatial audio semantics to generate a structured scene layout; then, this layout conditions a pre-trained diffusion model to synthesize videos—enabling layout-controllable generation without fine-tuning. The approach significantly improves both semantic accuracy and spatial alignment, outperforming state-of-the-art methods across multiple benchmarks. Generated videos exhibit precise sound-source localization and natural motion trajectories, aligning more closely with human audiovisual perception principles.
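
The summary above names loudness gradients and interaural (between-channel) differences as the kind of spatial cue the framework reads from audio. As a minimal sketch of what such a cue looks like in practice, the snippet below computes a per-frame interaural level difference from a stereo clip; the function name and framing parameters are illustrative assumptions, not the paper's actual feature extractor.

```python
import numpy as np

def interaural_level_difference(left: np.ndarray, right: np.ndarray,
                                frame_len: int = 2048, hop: int = 512) -> np.ndarray:
    """Per-frame level difference (dB) between the two channels of a stereo clip.

    A positive value suggests the sounding source sits toward the left channel,
    a negative value toward the right; the frame-to-frame trend gives a coarse
    motion direction. Illustrative sketch only, assuming the clip is longer
    than one frame.
    """
    n_frames = 1 + max(0, len(left) - frame_len) // hop
    ild = np.empty(n_frames)
    for i in range(n_frames):
        s = i * hop
        l_rms = np.sqrt(np.mean(left[s:s + frame_len] ** 2) + 1e-12)
        r_rms = np.sqrt(np.mean(right[s:s + frame_len] ** 2) + 1e-12)
        ild[i] = 20.0 * np.log10(l_rms / r_rms)
    return ild
```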

📝 Abstract
Audio-driven video generation aims to synthesize realistic videos that align with input audio recordings, akin to the human ability to visualize scenes from auditory input. However, existing approaches predominantly focus on exploring semantic information, such as the classes of sounding sources present in the audio, limiting their ability to generate videos with accurate content and spatial composition. In contrast, we humans can not only naturally identify the semantic categories of sounding sources but also determine their deeply encoded spatial attributes, including locations and movement directions. This useful information can be elucidated by considering specific spatial indicators derived from the inherent physical properties of sound, such as loudness or frequency. As prior methods largely ignore this factor, we present SpA2V, the first framework that explicitly exploits these spatial auditory cues from audio to generate videos with high semantic and spatial correspondence. SpA2V decomposes the generation process into two stages: 1) Audio-guided Video Planning: We meticulously adapt a state-of-the-art MLLM for a novel task of harnessing spatial and semantic cues from input audio to construct Video Scene Layouts (VSLs). This serves as an intermediate representation to bridge the gap between the audio and video modalities. 2) Layout-grounded Video Generation: We develop an efficient and effective approach to seamlessly integrate VSLs as conditional guidance into pre-trained diffusion models, enabling VSL-grounded video generation in a training-free manner. Extensive experiments demonstrate that SpA2V excels in generating realistic videos with semantic and spatial alignment to the input audio.
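
To make the two-stage decomposition described in the abstract concrete, here is a minimal structural sketch. The `LayoutEntity` container and the `plan_layout` and `sample` calls are hypothetical placeholders standing in for the adapted MLLM and the pre-trained video diffusion model; they are not APIs from the paper or any specific library.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LayoutEntity:
    label: str                          # semantic class of the sounding source
    boxes: List[Tuple[float, float, float, float]]  # per-frame (x0, y0, x1, y1) boxes tracing its motion

def audio_to_video(audio_path: str, mllm, video_diffusion):
    # Stage 1: Audio-guided Video Planning.
    # The MLLM reads semantic and spatial cues from the audio and emits a
    # Video Scene Layout (VSL): entities plus their per-frame positions.
    vsl: List[LayoutEntity] = mllm.plan_layout(audio_path)

    # Stage 2: Layout-grounded Video Generation.
    # The VSL conditions a pre-trained video diffusion model at inference
    # time (training-free), so each entity is synthesized where and when
    # the layout places it.
    return video_diffusion.sample(layout=vsl)
```

Under this reading, the second stage never sees the audio directly: all semantic and spatial information has already been distilled into the VSL, which is what lets it act as the bridge between modalities.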
Problem

Research questions and friction points this paper is trying to address.

Generating videos whose spatial composition accurately matches the input audio
Exploiting spatial auditory cues, not just semantics, for semantic and spatial alignment
Bridging the audio and video modalities with an intermediate scene-layout representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploits spatial auditory cues (e.g., loudness, frequency) for video generation
Decomposes generation into audio-guided video planning and layout-grounded video generation
Integrates Video Scene Layouts into pre-trained diffusion models in a training-free manner (see the sketch after this list)
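
On the last item: one common way layout boxes are injected into a frozen diffusion model without any fine-tuning is to bias its cross-attention so that the text tokens describing an entity attend inside that entity's box. The sketch below illustrates that general idea under assumed tensor shapes; it is not necessarily the conditioning mechanism SpA2V actually uses.

```python
import torch

def layout_guided_attention(attn_scores: torch.Tensor,
                            token_to_entity: dict,
                            entity_masks: torch.Tensor,
                            boost: float = 2.0) -> torch.Tensor:
    """Bias cross-attention so each entity's text tokens attend inside its box.

    attn_scores:     (num_image_patches, num_text_tokens) pre-softmax logits.
    token_to_entity: maps a text-token index to an entity index, for tokens
                     that describe a layout entity.
    entity_masks:    (num_entities, num_image_patches) binary masks rasterized
                     from the layout boxes for the current frame.
    """
    biased = attn_scores.clone()
    for tok, ent in token_to_entity.items():
        inside = entity_masks[ent].bool()
        # Encourage the token to attend inside its box and discourage it outside.
        biased[inside, tok] += boost
        biased[~inside, tok] -= boost
    return biased.softmax(dim=-1)
```

Because the bias is applied only at inference time, the underlying diffusion weights stay untouched, which is what "training-free" refers to here.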
🔎 Similar Papers
No similar papers found.