🤖 AI Summary
Single-channel audio separation suffers from a scarcity of high-quality paired training data and from poor generalization. This paper proposes an unsupervised, diffusion-based inverse-problem framework: it formulates separation as a probabilistic inverse problem regularized by a diffusion prior, designs a novel iterative solver that mitigates gradient conflicts between the prior guidance and the reconstruction objective, and introduces an enhanced mixture-based initialization strategy that improves optimization stability and separation quality. The method combines a time-frequency attention network with unsupervised optimization and requires no ground-truth source signals for supervision. It achieves state-of-the-art performance on speech-sound event separation, sound event separation, and speech separation tasks. Quantitative and perceptual evaluations show significant improvements in the fidelity, balance, and realism of the separated outputs, validating strong cross-domain generalization.
📝 Abstract
Single-channel audio separation aims to recover individual sources from a single-channel mixture. Most existing methods rely on supervised learning with synthetically generated paired data. However, obtaining high-quality paired data in real-world scenarios is often difficult; this data scarcity can degrade model performance under unseen conditions and limit generalization. To address this, we approach the problem from an unsupervised perspective, framing it as a probabilistic inverse problem. Our method requires only diffusion priors trained on individual sources. Separation is then achieved by iteratively guiding an initial state toward the solution through reconstruction guidance. Importantly, we introduce an advanced inverse-problem solver specifically designed for separation, which mitigates the gradient conflicts caused by interference between the diffusion prior and the reconstruction guidance during inverse denoising. This design ensures high-quality and balanced separation across the individual sources. Additionally, we find that initializing the denoising process with an augmented mixture, rather than pure Gaussian noise, provides an informative starting point that significantly improves final performance. To further strengthen audio prior modeling, we design a novel time-frequency attention-based network architecture with strong audio modeling capability. Collectively, these improvements lead to significant performance gains, as validated on speech-sound event, sound event, and speech separation tasks.
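To make the described procedure concrete, below is a minimal sketch of reconstruction-guided reverse diffusion for separation, written in PyTorch. It assumes a standard score/denoiser-style prior per source and a plain Euler-type update with a weighted guidance gradient; the `priors` interface, the `guidance_weight` parameter, the noise schedule, and the mixture-based initialization details are all illustrative assumptions, not the paper's actual solver (which reportedly balances the prior and guidance terms to avoid gradient conflicts).

```python
import torch

def separate(mixture, priors, sigmas, guidance_weight=1.0):
    """Reconstruction-guided reverse diffusion for K-source separation (sketch).

    mixture : single-channel mixture tensor (waveform or spectrogram)
    priors  : list of K denoiser models, one per source type; priors[k](x, sigma)
              is assumed to return a denoised estimate of x at noise level sigma
    sigmas  : decreasing noise-level schedule (1-D tensor)
    """
    K = len(priors)
    # Informative initialization: start each source from a noised copy of the
    # mixture rather than pure Gaussian noise (augmentation details omitted).
    x = [mixture / K + sigmas[0] * torch.randn_like(mixture) for _ in range(K)]

    for i in range(len(sigmas) - 1):
        sigma, sigma_next = sigmas[i], sigmas[i + 1]
        x = [xi.detach().requires_grad_(True) for xi in x]

        # Prior step: denoised estimate of each source from its diffusion prior.
        denoised = [priors[k](x[k], sigma) for k in range(K)]

        # Reconstruction guidance: the denoised sources should sum to the mixture.
        residual = mixture - sum(denoised)
        rec_loss = residual.pow(2).sum()
        grads = torch.autograd.grad(rec_loss, x)

        # Euler-style update combining the prior drift with the guidance gradient.
        # A plain weighted sum is used here purely for illustration.
        with torch.no_grad():
            x = [
                x[k]
                + (sigma_next - sigma) * (x[k] - denoised[k]) / sigma
                - guidance_weight * grads[k]
                for k in range(K)
            ]
    return [xi.detach() for xi in x]
```

In this sketch the per-source priors carry all learned knowledge, so no paired mixture-source data is needed; only the sum constraint ties the sources together at sampling time, which is the sense in which separation is solved as an inverse problem.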