🤖 AI Summary
This work addresses the problem that existing sound separation methods learn spurious correlations in complex acoustic scenes, caused by weak labels and severe event co-occurrence in in-the-wild training data, which leads to poor target-sound isolation. The authors propose a paradigm that prioritizes supervision-signal purity: an automated pipeline mines high-purity single-event segments from in-the-wild data and recombines them through a semantically consistent synthesis protocol, yielding Hive, a compact yet high-quality synthetic dataset of only 2.4k hours. A query-based universal sound separation model trained on Hive is highly data-efficient, drastically reducing reliance on large-scale data and compute: it achieves separation accuracy and perceptual quality competitive with a state-of-the-art model trained on roughly 500× more data, while also demonstrating strong zero-shot generalization.
📝 Abstract
Query-based universal sound separation, which aims to isolate specific sources from acoustic mixtures, is fundamental to intelligent auditory systems. Despite recent advances, existing methods still suffer from residual interference in complex acoustic scenes. This limitation stems largely from a data bottleneck: in-the-wild datasets carry weak labels and severe event co-occurrence, which induce models to learn spurious correlations between background noise and target categories instead of robust acoustic features. To address this, we propose an automated pipeline that eliminates event co-occurrence by mining high-purity single-event segments from in-the-wild datasets and recombining them via a semantically consistent synthesis protocol. Using this pipeline, we construct Hive, a high-quality synthetic dataset comprising 2.4k hours of raw audio. Experiments show that open-source models trained on Hive achieve separation accuracy and perceptual quality competitive with the state-of-the-art SAM-Audio model, which was trained on a dataset $\sim$500 times larger than Hive. Moreover, these models exhibit remarkable zero-shot generalization on out-of-distribution evaluation benchmarks. These findings highlight that prioritizing supervision-signal purity yields substantial data efficiency, offering a new paradigm for training robust auditory foundation models at reduced computational cost. Code and dataset are available at https://shandaai.github.io/Hive.
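The core of such a pipeline is that every training pair is synthesized from mined single-event segments, so the supervision target is guaranteed to be pure. A minimal sketch of this mixing step is below; all function names, the `[-5, 5]` dB SNR range, and the label-based pairing rule are illustrative assumptions, not the paper's actual synthesis protocol:

```python
import numpy as np

def mix_at_snr(target, interferer, snr_db):
    """Scale `interferer` so the target-to-interferer power ratio equals
    `snr_db`, then sum the two equal-length mono waveforms."""
    p_t = np.mean(target ** 2)
    p_i = np.mean(interferer ** 2) + 1e-12  # guard against silent clips
    # Solve 10*log10(p_t / (g^2 * p_i)) = snr_db for the gain g.
    gain = np.sqrt(p_t / (p_i * 10 ** (snr_db / 10)))
    return target + gain * interferer

def make_training_pair(segments, labels, rng=None):
    """Draw two single-event segments with different event labels, mix them
    at a random SNR, and return the (mixture, target, target_label) triple
    used as supervision for query-based separation. Assumes at least two
    distinct labels are present."""
    rng = rng or np.random.default_rng(0)
    idx = rng.permutation(len(segments))
    i = idx[0]
    j = next(k for k in idx[1:] if labels[k] != labels[i])
    snr_db = rng.uniform(-5.0, 5.0)  # assumed range, not from the paper
    mixture = mix_at_snr(segments[i], segments[j], snr_db)
    return mixture, segments[i], labels[i]
```

Because each mixture is built from segments known to contain a single event, the model never sees a "target" that secretly contains interfering sources, which is exactly the co-occurrence failure mode the pipeline is designed to eliminate.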