High-Quality Sound Separation Across Diverse Categories via Visually-Guided Generative Modeling

πŸ“… 2025-09-26
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing audio-visual sound source separation (AVSSS) methods predominantly rely on mask regression, which struggles to model the complex acoustic distributions needed for high-fidelity separation across many sound classes. To address this, the authors propose DAVIS, a generative framework that, they argue, brings vision-guided generative modeling to AVSSS for the first time, departing from the conventional regression-based paradigm. DAVIS generates target-source spectrograms end-to-end in the time-frequency domain, conditioned on the mixture and the associated visual input. It comes in two variants, one based on denoising diffusion probabilistic models (DDPM) and one on flow matching (FM), both built on a dedicated Separation U-Net that enables fine-grained modeling of diverse sound distributions. Extensive experiments on the AVE and MUSIC benchmarks show that DAVIS outperforms state-of-the-art methods across multiple classes and evaluation metrics, supporting the effectiveness of the generative paradigm for cross-category, high-fidelity audio-visual separation.
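The paper does not ship code with this summary, but the DDPM variant it describes rests on the standard diffusion reverse process, run with the mixture spectrogram and a visual embedding passed to the denoiser at every step. The following is a minimal NumPy sketch of that sampling loop; the function names, the toy linear noise schedule, and the `denoise_fn` signature are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def ddpm_sample(denoise_fn, mixture, visual, steps=50, shape=(64, 64), seed=0):
    """Toy DDPM reverse process: start from Gaussian noise and iteratively
    denoise toward a target-source spectrogram. The conditioning signals
    (mixture spectrogram, visual embedding) are fed to the denoiser at
    every step, mirroring the conditional generation described above."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)           # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(shape)                   # x_T ~ N(0, I)
    for t in reversed(range(steps)):
        eps_hat = denoise_fn(x, t, mixture, visual)  # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise         # ancestral sampling step
    return x                                         # separated spectrogram estimate
```

In the real system the `denoise_fn` role is played by the Separation U-Net; here any callable with the same signature works.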

πŸ“ Abstract
We propose DAVIS, a Diffusion-based Audio-VIsual Separation framework that solves the audio-visual sound source separation task through generative learning. Existing methods typically frame sound separation as a mask-based regression problem, achieving significant progress. However, they face limitations in capturing the complex data distribution required for high-quality separation of sounds from diverse categories. In contrast, DAVIS circumvents these issues by leveraging potent generative modeling paradigms, specifically Denoising Diffusion Probabilistic Models (DDPM) and the more recent Flow Matching (FM), integrated within a specialized Separation U-Net architecture. Our framework operates by synthesizing the desired separated sound spectrograms directly from a noise distribution, conditioned concurrently on the mixed audio input and associated visual information. The inherent nature of its generative objective makes DAVIS particularly adept at producing high-quality sound separations for diverse sound categories. We present comparative evaluations of DAVIS, encompassing both its DDPM and Flow Matching variants, against leading methods on the standard AVE and MUSIC datasets. The results affirm that both variants surpass existing approaches in separation quality, highlighting the efficacy of our generative framework for tackling the audio-visual source separation task.
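The abstract's Flow Matching variant trains a vector field to transport noise to the target spectrogram. A common instantiation, sketched below in NumPy, regresses the model's velocity onto the constant velocity of a straight path between a noise sample and the data sample; the function name and interface are hypothetical, and this is the generic conditional flow-matching objective rather than DAVIS's exact formulation.

```python
import numpy as np

def flow_matching_loss(vector_field, x0, x1, t, cond):
    """Conditional flow-matching regression: along the straight path
    x_t = (1 - t) * x0 + t * x1 from noise x0 to data x1, the ideal
    velocity is the constant x1 - x0, so the model is trained to
    predict it given (x_t, t) and the conditioning inputs."""
    xt = (1.0 - t) * x0 + t * x1       # point on the interpolation path
    v_target = x1 - x0                 # constant target velocity
    v_pred = vector_field(xt, t, cond)
    return float(np.mean((v_pred - v_target) ** 2))
```

At inference, integrating the learned vector field from t = 0 to t = 1 (e.g. with a few Euler steps) plays the role of the DDPM sampling loop.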
Problem

Research questions and friction points this paper is trying to address.

How to reframe audio-visual sound source separation as conditional generative modeling rather than mask-based regression
How to synthesize separated sound spectrograms directly from noise, conditioned on the mixed audio and visual inputs
How to improve separation quality across diverse sound categories using diffusion and flow-matching models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaces mask regression with diffusion- and flow-matching-based generation of target-source spectrograms
Conditions generation jointly on the mixture spectrogram and visual cues via a dedicated Separation U-Net
Synthesizes separated spectrograms directly from a noise distribution, which suits diverse sound categories
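The bullets above hinge on feeding both the mixture and the visual signal into the generator. One common pattern, sketched here as an assumption (the paper's actual fusion inside the Separation U-Net may differ), is to stack the noisy estimate and the mixture as input channels and apply a FiLM-style scale/shift derived from the visual embedding:

```python
import numpy as np

def condition_inputs(noisy_spec, mixture_spec, visual_emb):
    """Hypothetical conditioning scheme: stack the noisy spectrogram
    estimate and the mixture spectrogram as two input channels, then
    modulate them with a FiLM-style scale and shift computed from the
    visual embedding (here reduced to toy scalars for illustration)."""
    x = np.stack([noisy_spec, mixture_spec], axis=0)  # (2, F, T) channels
    half = visual_emb.shape[0] // 2
    gamma = visual_emb[:half].mean()                  # toy scale from visual cue
    beta = visual_emb[half:].mean()                   # toy shift from visual cue
    return (1.0 + gamma) * x + beta
```

In a real network the scale and shift would be per-channel vectors produced by a learned projection of the visual features rather than scalar means.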
πŸ”Ž Similar Papers
No similar papers found.