On Temporal Guidance and Iterative Refinement in Audio Source Separation

📅 2025-07-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional two-stage audio source separation approaches—first detecting sound events then separating sources—struggle in complex acoustic mixtures due to insufficient fine-grained temporal modeling. To address this, we propose a time-varying collaborative framework: (1) a fine-tuned pre-trained Transformer performs high-accuracy, frame-level sound event detection (SED) to generate dynamic temporal guidance signals; (2) an iterative refinement separation network explicitly models temporal dynamics by jointly incorporating label-conditioned constraints and recursive output feedback. Evaluated on DCASE 2025 Task 4, our method achieves second place, significantly improving both SED F1-score (+2.1%) and separation quality (SI-SNRi +1.8 dB) over strong baselines. Results demonstrate that time-aware collaborative modeling effectively bridges the gap between detection and separation, enabling more accurate and temporally coherent joint optimization.

Technology Category

Application Category

📝 Abstract
Spatial semantic segmentation of sound scenes (S5) involves the accurate identification of active sound classes and the precise separation of their sources from complex acoustic mixtures. Conventional systems rely on a two-stage pipeline - audio tagging followed by label-conditioned source separation - but are often constrained by the absence of fine-grained temporal information critical for effective separation. In this work, we address this limitation by introducing a novel approach for S5 that enhances the synergy between the event detection and source separation stages. Our key contributions are threefold. First, we fine-tune a pre-trained Transformer to detect active sound classes. Second, we utilize a separate instance of this fine-tuned Transformer to perform sound event detection (SED), providing the separation module with detailed, time-varying guidance. Third, we implement an iterative refinement mechanism that progressively enhances separation quality by recursively reusing the separator's output from previous iterations. These advancements lead to significant improvements in both audio tagging and source separation performance, as demonstrated by our system's second-place finish in Task 4 of the DCASE Challenge 2025. Our implementation and model checkpoints are available in our GitHub repository: https://github.com/theMoro/dcase25task4 .
Problem

Research questions and friction points this paper is trying to address.

Improves audio source separation with temporal guidance
Enhances synergy between event detection and separation
Uses iterative refinement to boost separation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned Transformer for sound class detection
Time-varying SED guidance for separation module
Iterative refinement mechanism enhances separation quality
🔎 Similar Papers
No similar papers found.