MuQ: Self-Supervised Music Representation Learning with Mel Residual Vector Quantization

📅 2025-01-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the bottleneck of heavy reliance on large-scale manual annotations in music understanding tasks, this paper proposes MuQ, a self-supervised music representation learning framework. MuQ introduces Mel Residual Vector Quantization (Mel-RVQ), a novel mechanism that replaces the conventional random projections or neural codecs with tokens produced by residual quantization of Mel spectrograms, yielding efficient and stable pre-training targets. Building upon MuQ, the authors further develop MuQ-MuLan, a cross-modal music–text joint embedding model that supports zero-shot music tagging. Trained on only 0.9K hours of open-source audio, MuQ already surpasses existing self-supervised learning (SSL) music models; scaling to 160K hours yields consistent performance gains. On the MagnaTagATune zero-shot tagging benchmark, MuQ-MuLan achieves state-of-the-art results, significantly reducing dependence on labeled data.
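MuQ-MuLan's zero-shot tagging works by embedding audio and candidate tag text into one joint space and ranking tags by cosine similarity. A minimal sketch of that inference step, using toy embeddings in place of the real model's encoders (all names, dimensions, and values below are illustrative assumptions, not the released API):

```python
import numpy as np

def zero_shot_tag(audio_emb, tag_embs, tag_names, top_k=2):
    """Rank candidate tags by cosine similarity in a joint music-text space.
    Illustrative of MuLan-style zero-shot tagging; embeddings here are toy values,
    whereas the real system would obtain them from audio/text encoders."""
    a = audio_emb / np.linalg.norm(audio_emb)                      # unit-normalize audio embedding
    t = tag_embs / np.linalg.norm(tag_embs, axis=1, keepdims=True)  # unit-normalize each tag embedding
    sims = t @ a                                                   # cosine similarity per tag
    order = np.argsort(-sims)[:top_k]                              # highest-similarity tags first
    return [(tag_names[i], float(sims[i])) for i in order]

# toy example: three candidate tags with axis-aligned "text" embeddings
tag_names = ["rock", "piano", "ambient"]
tag_embs = np.eye(3)                    # hypothetical text embeddings
audio_emb = np.array([0.9, 0.1, 0.4])   # hypothetical audio embedding
ranked = zero_shot_tag(audio_emb, tag_embs, tag_names)
```

Because no tag-specific classifier is trained, any tag vocabulary can be scored at inference time; this is what makes the tagging "zero-shot".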

📝 Abstract
Recent years have witnessed the success of foundation models pre-trained with self-supervised learning (SSL) in various music informatics understanding tasks, including music tagging, instrument classification, key detection, and more. In this paper, we propose a self-supervised music representation learning model for music understanding. Distinguished from previous studies adopting random projection or existing neural codecs, the proposed model, named MuQ, is trained to predict tokens generated by Mel Residual Vector Quantization (Mel-RVQ). Our Mel-RVQ uses a residual linear projection structure for Mel spectrum quantization to enhance the stability and efficiency of target extraction, leading to better performance. Experiments on a large variety of downstream tasks demonstrate that MuQ outperforms previous self-supervised music representation models with only 0.9K hours of open-source pre-training data. Scaling up the data to over 160K hours and adopting iterative training consistently improves model performance. To further validate the strength of our model, we present MuQ-MuLan, a joint music-text embedding model based on contrastive learning, which achieves state-of-the-art performance on the zero-shot music tagging task on the MagnaTagATune dataset. Code and checkpoints are open-sourced at https://github.com/tencent-ailab/MuQ.
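The core idea of residual vector quantization is that each stage quantizes the error left over by the previous stage, so a few small codebooks together give a much finer approximation than any single one. A minimal numpy sketch of the RVQ encoding step on a single Mel-spectrogram frame (codebooks, sizes, and the frame itself are toy assumptions; the paper's Mel-RVQ additionally learns the quantizer with a residual linear projection structure, which is not modeled here):

```python
import numpy as np

rng = np.random.default_rng(0)

def rvq_encode(x, codebooks):
    """Residual VQ: stage i quantizes the residual left by stages 0..i-1.
    x: (D,) feature vector (e.g. one Mel-spectrogram frame).
    codebooks: list of (K, D) arrays, one codebook per quantization stage.
    Returns the per-stage token indices and the summed reconstruction."""
    residual = x.astype(np.float64)
    tokens = []
    quantized = np.zeros_like(residual)
    for cb in codebooks:
        # pick the codeword nearest to the current residual
        idx = int(np.argmin(((cb - residual) ** 2).sum(axis=1)))
        tokens.append(idx)
        quantized += cb[idx]        # accumulate the reconstruction
        residual = residual - cb[idx]  # what later stages still have to explain
    return tokens, quantized

# toy setup: 2 stages, 8 codewords each, 4-dim "Mel" frame (all sizes illustrative)
D, K = 4, 8
codebooks = [rng.normal(size=(K, D)) for _ in range(2)]
frame = rng.normal(size=D)
tokens, approx = rvq_encode(frame, codebooks)
```

In MuQ's setting, the token sequence produced this way serves as the discrete prediction target for SSL pre-training, rather than as a compression code.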
Problem

Research questions and friction points this paper is trying to address.

Self-supervised learning models
Music representation
Automatic music analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mel-RVQ
self-supervised learning
zero-shot music tagging