Self-Supervised Learning Method Using Multiple Sampling Strategies for General-Purpose Audio Representation

📅 2022-05-23
🏛️ IEEE International Conference on Acoustics, Speech, and Signal Processing
📈 Citations: 1
Influential: 0
🤖 AI Summary
Conventional self-supervised audio representation learning relies solely on clip-level sampling, leading to insufficient frame-level modeling capability. Method: This paper proposes a multi-granularity contrastive learning framework that jointly leverages clip-level, frame-level, and task-guided sampling to construct multi-perspective contrastive losses, enabling collaborative optimization of general-purpose audio representations. Contribution/Results: To our knowledge, this is the first work to incorporate both frame-level and task-specific sampling into self-supervised pre-training, overcoming the limitations of single-granularity representation learning. Pre-trained on a subset of AudioSet and evaluated via frozen-feature transfer to downstream tasks, our method achieves 25%, 20%, and 3.6% absolute improvements in clip classification, sound event detection, and pitch detection, respectively—demonstrating significantly enhanced fine-grained frame-level perception.

📝 Abstract
We propose a self-supervised learning method using multiple sampling strategies to obtain general-purpose audio representations. The proposed method uses multiple sampling strategies to construct contrastive losses from different perspectives and learns representations based on them. In addition to the widely used clip-level sampling strategy, we introduce two new strategies: a frame-level strategy and a task-specific strategy. The proposed multiple strategies improve performance on frame-level classification and on other tasks, such as pitch detection, that are not the focus of the conventional single clip-level sampling strategy. We pre-trained the method on a subset of AudioSet and applied it to downstream tasks with frozen weights. The proposed method improved clip classification, sound event detection, and pitch detection performance by 25%, 20%, and 3.6%, respectively.
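The paper does not include code, but the core idea — constructing a separate contrastive (InfoNCE-style) loss per sampling strategy and summing them — can be illustrated with a minimal NumPy sketch. All names, dimensions, noise levels, and the unweighted sum are hypothetical assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: row i of `positives` is the positive for row i of
    `anchors`; every other row serves as a negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

rng = np.random.default_rng(0)

# Clip-level sampling: two augmented "views" of each whole-clip embedding.
clip_a = rng.normal(size=(8, 16))
clip_b = clip_a + 0.05 * rng.normal(size=(8, 16))

# Frame-level sampling: nearby frames within a clip form positive pairs.
frames = rng.normal(size=(8, 16))
frames_next = frames + 0.05 * rng.normal(size=(8, 16))

# Multi-strategy objective: sum the per-strategy contrastive losses
# (equal weighting is an assumption here).
total_loss = info_nce(clip_a, clip_b) + info_nce(frames, frames_next)
```

A task-specific strategy would add a third term of the same form, with positive pairs chosen to suit the target task (e.g. pitch-preserving pairs for pitch detection).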
Problem

Research questions and friction points this paper is trying to address.

Develop self-supervised learning for general audio representation
Improve frame-level classification via multi-strategy contrastive losses
Enhance pitch detection and sound event detection performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised learning with multiple sampling strategies
Introduces frame-level and task-specific sampling strategies
Improves performance in classification and pitch detection
Ibuki Kuroyanagi
LINE Corporation, Tokyo, Japan; Nagoya University, Nagoya, Japan
Tatsuya Komatsu
LINE Corporation
Signal Processing · Sound Event Detection · Source Separation