Unsupervised Single-Channel Audio Separation with Diffusion Source Priors

πŸ“… 2025-12-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Single-channel audio separation suffers from a scarcity of high-quality paired training data and from poor generalization. This paper proposes an unsupervised, diffusion-based inverse-problem framework: it formulates separation as a probabilistic inverse problem regularized by diffusion priors, designs a novel iterative solver that mitigates gradient conflicts between prior guidance and the reconstruction objective, and introduces an enhanced mixture-based initialization strategy that improves optimization stability and separation quality. The method pairs a time-frequency attention network with unsupervised optimization, requiring no ground-truth source signals for supervision. It achieves state-of-the-art performance on speech–sound event separation, sound-event-only separation, and speech separation. Quantitative and perceptual evaluations show significant gains in the fidelity, balance, and realism of the separated outputs, indicating strong cross-domain generalization.

πŸ“ Abstract
Single-channel audio separation aims to recover individual sources from a single-channel mixture. Most existing methods rely on supervised learning with synthetically generated paired data. However, obtaining high-quality paired data in real-world scenarios is often difficult; this data scarcity can degrade model performance under unseen conditions and limit generalization. In this work, we instead approach the problem from an unsupervised perspective, framing it as a probabilistic inverse problem. Our method requires only diffusion priors trained on individual sources; separation is then achieved by iteratively guiding an initial state toward the solution through reconstruction guidance. Importantly, we introduce an advanced inverse-problem solver designed specifically for separation, which mitigates the gradient conflicts caused by interference between the diffusion prior and reconstruction guidance during inverse denoising. This design ensures high-quality and balanced separation across the individual sources. Additionally, we find that initializing the denoising process with an augmented mixture rather than pure Gaussian noise provides an informative starting point that significantly improves final performance. To further strengthen audio prior modeling, we design a novel time-frequency attention-based network architecture with strong audio modeling capability. Collectively, these improvements yield significant performance gains, validated across speech–sound event, sound event, and speech separation tasks.
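The sampling loop the abstract describes can be illustrated with a minimal toy sketch. Everything below is a hypothetical stand-in, not the authors' implementation: `toy_score` replaces the learned diffusion prior with a closed-form Gaussian score, and the update is a deterministic annealed version of prior score plus a mixture-reconstruction gradient, started from an equal split of the mixture rather than pure noise (mirroring the paper's informative initialization).

```python
import numpy as np

def toy_score(x, sigma):
    # Hypothetical stand-in for a learned per-source diffusion prior.
    # For a zero-mean unit Gaussian prior perturbed at noise level sigma,
    # the score of the noised density is -x / (1 + sigma**2).
    return -x / (1.0 + sigma**2)

def separate(mixture, n_sources=2, n_steps=200, guidance=2.0):
    """Toy reconstruction-guided separation: anneal the noise level while
    following the prior score plus a mixture-consistency gradient."""
    sigmas = np.geomspace(1.0, 1e-3, n_steps)      # decreasing noise schedule
    # Informative start: split the mixture equally instead of
    # initializing each source from pure Gaussian noise.
    x = [mixture / n_sources for _ in range(n_sources)]
    for sigma in sigmas:
        step = 0.3 * sigma**2
        residual = mixture - sum(x)                # re-mixing error
        for i in range(n_sources):
            grad = toy_score(x[i], sigma) + guidance * residual / sigma**2
            x[i] = x[i] + step * grad
    return x
```

With identical toy priors the two estimates converge symmetrically and their sum re-synthesizes the mixture; the real method breaks this symmetry because each source has its own learned prior.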
Problem

Research questions and friction points this paper is trying to address.

Unsupervised separation of single-channel audio mixtures
Mitigating gradient conflicts in diffusion-based inverse problems
Enhancing audio prior modeling with time-frequency attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised separation using diffusion source priors
Advanced solver mitigates gradient conflicts in denoising
Time-frequency attention network enhances audio prior modeling
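The "time-frequency attention" idea listed above is commonly realized by applying self-attention alternately along the frequency and time axes of a spectrogram feature map. A hedged numpy sketch of that generic pattern (shapes, identity Q/K/V projections, and function names are assumptions for illustration, not the paper's architecture):

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)   # subtract max for stability
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Single-head self-attention over the first axis of x: (L, D)."""
    # Identity projections keep the toy minimal; real models learn Q/K/V.
    q, k, v = x, x, x
    w = softmax(q @ k.T / np.sqrt(x.shape[-1]))
    return w @ v

def tf_attention_block(spec):
    """spec: (T, F, D) time-frequency feature map.
    Attend along frequency within each frame, then along time per bin."""
    T, F, D = spec.shape
    out = np.stack([self_attention(spec[t]) for t in range(T)])        # freq attention
    out = np.stack([self_attention(out[:, f]) for f in range(F)], 1)   # time attention
    return out
```

Factoring attention this way keeps the cost at O(T·F²) + O(F·T²) instead of O((T·F)²) for full 2-D attention, which is one reason the axial pattern is popular for spectrograms.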
Runwu Shi
Institute of Science Tokyo
Signal processing · Intelligent vehicle
Chang Li
University of Science and Technology of China
Jiang Wang
Department of Systems and Control Engineering, Institute of Science Tokyo
Rui Zhang
University of Hong Kong
Nabeela Khan
Department of Systems and Control Engineering, Institute of Science Tokyo
Benjamin Yen
Institute of Science Tokyo
Takeshi Ashizawa
Department of Systems and Control Engineering, Institute of Science Tokyo
Kazuhiro Nakadai
Institute of Science Tokyo
Robot Audition and Scene Analysis · Artificial Intelligence · Signal and Speech Processing · Robotics