🤖 AI Summary
Existing general-purpose audio pretraining is constrained by weak, noisy, and limited-scale labels, and lacks a unified framework for strong supervision. This work proposes the first Unified Tag System (UTS), which integrates speech, music, and environmental sounds, and establishes a high-fidelity audio captioning pipeline to enable a new pretraining paradigm centered on high-quality, strongly supervised data. Through a systematic evaluation of multiple pretraining objectives within this framework, the study demonstrates that data quality and coverage are the primary drivers of performance gains, and further reveals that the choice of pretraining objective substantially shapes the model's specialization across downstream tasks.
📝 Abstract
Current audio pre-training seeks to learn unified representations for broad audio understanding tasks, but the field remains fragmented and is fundamentally bottlenecked by its reliance on weak, noisy, and scale-limited labels. Drawing lessons from vision's foundational pre-training blueprint, we argue that the audio field must first establish its own large-scale strong-supervision framework. We introduce a data-centric pipeline that leverages a high-fidelity captioner to produce state-of-the-art captions, together with the first Unified Tag System (UTS), which bridges speech, music, and environmental sounds. We then conduct a systematic comparative study of different pre-training objectives on this strongly supervised data. Our experiments suggest that data quality and coverage are the primary drivers of performance, while the choice of objective dictates downstream task specialization.
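To make the idea of a single tag system spanning all three audio domains concrete, the sketch below shows one way such a unified record might look. This is purely illustrative: the class name, field names, and tag values are assumptions for exposition, not the paper's actual UTS schema.

```python
# Hypothetical sketch of a unified tag record spanning the three audio domains.
# All identifiers and tag values here are illustrative assumptions,
# not the actual UTS schema described in the paper.
from dataclasses import dataclass, field

DOMAINS = {"speech", "music", "environmental"}

@dataclass
class UTSRecord:
    clip_id: str
    domain: str                                   # one of DOMAINS
    tags: list = field(default_factory=list)      # tags drawn from a shared vocabulary
    caption: str = ""                             # high-fidelity caption from the pipeline

    def __post_init__(self):
        # Reject records outside the unified domain set.
        if self.domain not in DOMAINS:
            raise ValueError(f"unknown domain: {self.domain}")

# One record per domain, all sharing the same schema, so a single
# pre-training corpus can mix speech, music, and environmental sounds.
records = [
    UTSRecord("clip_001", "speech", ["female_speaker", "read_speech"],
              "A woman reads a news passage."),
    UTSRecord("clip_002", "music", ["piano", "slow_tempo"],
              "A slow solo piano piece."),
    UTSRecord("clip_003", "environmental", ["rain", "thunder"],
              "Heavy rain with distant thunder."),
]
print(sorted({r.domain for r in records}))
```

The point of the sketch is only that strong supervision (tags plus a caption) attaches uniformly to every clip regardless of domain, which is what lets heterogeneous audio sources feed one pre-training corpus.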