🤖 AI Summary
Conventional self-supervised audio representation learning relies solely on clip-level sampling, leading to insufficient frame-level modeling capability. Method: This paper proposes a multi-granularity contrastive learning framework that jointly leverages clip-level, frame-level, and task-guided sampling to construct contrastive losses from multiple perspectives, enabling collaborative optimization of general-purpose audio representations. Contribution/Results: To our knowledge, this is the first work to incorporate both frame-level and task-specific sampling into self-supervised pre-training, overcoming the limitations of single-granularity representation learning. Pre-trained on a subset of AudioSet and evaluated via frozen-feature transfer to downstream tasks, the method improves clip classification, sound event detection, and pitch detection by 25%, 20%, and 3.6%, respectively, demonstrating significantly enhanced fine-grained frame-level perception.
📝 Abstract
We propose a self-supervised learning method that uses multiple sampling strategies to obtain general-purpose audio representations. The proposed method uses these sampling strategies to construct contrastive losses from different perspectives and learns representations based on them. In addition to the widely used clip-level sampling strategy, we introduce two new strategies: a frame-level strategy and a task-specific strategy. The proposed combination of strategies improves performance on frame-level classification and on tasks such as pitch detection, which are not well served by the conventional clip-level sampling strategy alone. We pre-trained the method on a subset of AudioSet and applied it to downstream tasks with frozen weights. The proposed method improved clip classification, sound event detection, and pitch detection performance by 25%, 20%, and 3.6%, respectively.
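The abstract does not specify the exact form of the contrastive losses or how the per-strategy terms are weighted, so the following is only a minimal NumPy sketch of the general idea: an InfoNCE-style loss computed once on clip-level (pooled) embeddings and once on sampled frame-level embeddings, then summed. All names, the pooling choice (mean), and the equal weighting are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss: each anchor's positive is the same-index
    row of `positives`; all other rows in the batch act as negatives."""
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # cross-entropy on the diagonal

rng = np.random.default_rng(0)
B, T, D = 8, 16, 32                   # batch size, frames per clip, embed dim
view1 = rng.normal(size=(B, T, D))    # frame embeddings of one augmented view
view2 = view1 + 0.05 * rng.normal(size=(B, T, D))  # second, perturbed view

# Clip-level term: pool frames into one embedding per clip, contrast clips
clip_loss = info_nce(view1.mean(axis=1), view2.mean(axis=1))

# Frame-level term: sample one frame per clip, contrast frames across the batch
idx = rng.integers(0, T, size=B)
frame_loss = info_nce(view1[np.arange(B), idx], view2[np.arange(B), idx])

# A task-specific term would be built analogously from task-guided sampling;
# the combination weights here are hypothetical (equal weighting).
total_loss = clip_loss + frame_loss
```

In this sketch, each sampling strategy only changes *which* embedding pairs enter the same InfoNCE objective, which is one plausible reading of how a single framework can optimize clip-level and frame-level representations jointly.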