🤖 AI Summary
Music structure analysis (MSA) lacks a systematic empirical evaluation of foundation audio encoders (FAEs), leaving it unclear how pretraining paradigms, training data, and context length shape a model’s grasp of musical structure.
Method: We conduct a comprehensive benchmark of 11 FAEs on MSA tasks, employing controlled ablation studies, standardized structural-annotation benchmarks (e.g., RWC-Pop, SALAMI), feature interpretability analysis, and cross-model attribution; a minimal probing sketch in this spirit follows the summary.
Contribution/Results: We find, first, that FAEs pretrained via self-supervised masked language modeling (MLM) significantly outperform supervised baselines in segment-level structural identification and, second, that the pretraining objective exerts greater influence on MSA performance than architectural choices do. These results establish FAEs as highly transferable, fine-tuning-free representations for MSA, thereby filling a critical empirical gap in evaluating FAEs’ capacity for music structural understanding.
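Because the benchmark evaluates frozen, fine-tuning-free representations, a lightweight training-free probe makes the comparison concrete. The sketch below is an illustrative assumption, not the paper’s actual probe: it scores frame-level FAE embeddings (the `embeddings` array and `frame_rate` are hypothetical stand-ins for whatever encoder you extract from) with a classic Foote checkerboard-kernel novelty detector, then measures boundary hit-rate at the standard 0.5 s tolerance with the `mir_eval` library.

```python
# Minimal, training-free probe: frozen FAE frame embeddings -> novelty-based
# boundary detection -> mir_eval boundary F-measure. Illustrative only; the
# paper's actual probing setup may differ.
import numpy as np
import scipy.signal
import mir_eval


def foote_novelty(embeddings: np.ndarray, kernel_size: int = 32) -> np.ndarray:
    """Novelty curve from a cosine self-similarity matrix (Foote-style kernel)."""
    X = embeddings / (np.linalg.norm(embeddings, axis=1, keepdims=True) + 1e-8)
    ssm = X @ X.T  # (T, T) cosine self-similarity matrix
    half = kernel_size // 2
    sign = np.r_[-np.ones(half), np.ones(half)]
    gauss = scipy.signal.windows.gaussian(kernel_size, std=kernel_size / 4)
    kernel = np.outer(sign * gauss, sign * gauss)  # Gaussian-tapered checkerboard
    novelty = np.zeros(len(X))
    for t in range(half, len(X) - half):
        novelty[t] = np.sum(ssm[t - half:t + half, t - half:t + half] * kernel)
    return novelty


def detect_boundaries(novelty: np.ndarray, frame_rate: float) -> np.ndarray:
    """Pick novelty peaks at least ~2 s apart; return boundary times in seconds."""
    peaks, _ = scipy.signal.find_peaks(novelty, distance=max(1, int(2 * frame_rate)))
    return peaks / frame_rate


def boundary_f_measure(ref_intervals, boundary_times, duration):
    """HR.5F: boundary hit-rate F-measure with a 0.5 s tolerance window."""
    times = np.unique(np.r_[0.0, boundary_times, duration])
    est_intervals = np.c_[times[:-1], times[1:]]
    _, _, f = mir_eval.segment.detection(ref_intervals, est_intervals, window=0.5)
    return f
```

Reference intervals can be loaded with `mir_eval.io.load_labeled_intervals` from a SALAMI-style annotation file. Holding a fixed probe like this while swapping frozen encoders is one way to attribute performance differences to the pretraining objective rather than to the downstream model, which is the kind of controlled comparison the benchmark describes.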
📝 Abstract
In music information retrieval (MIR) research, the use of pretrained foundation audio encoders (FAEs) has recently become a trend. FAEs pretrained on large amounts of music and audio data have been shown to improve performance on MIR tasks such as music tagging and automatic music transcription. However, their use for music structure analysis (MSA) remains underexplored. Although many open-source FAE models are available, only a small subset has been examined for MSA, and the impact of factors such as learning methods, training data, and model context length on MSA performance remains unclear. In this study, we conduct comprehensive experiments on 11 types of FAEs to investigate how these factors affect MSA performance. Our results demonstrate that FAEs using self-supervised learning with masked language modeling on music data are particularly effective for MSA. These findings pave the way for future research in MSA.
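For readers unfamiliar with the term, "masked language modeling" here means masked prediction over audio frames rather than text tokens. The sketch below is a toy illustration under stated assumptions: music FAEs in this family typically predict discrete pseudo-labels from a quantizer or tokenizer, whereas this minimal PyTorch version simply regresses the masked frames, and every class name and dimension is hypothetical.

```python
# Toy masked-prediction pretraining objective over audio frames. Real FAEs
# usually predict discrete targets; direct regression is used here for brevity.
import torch
import torch.nn as nn


class MaskedFramePretrainer(nn.Module):
    def __init__(self, dim: int = 128, mask_ratio: float = 0.3):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(dim))  # learned [MASK] embedding
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.head = nn.Linear(dim, dim)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, dim) acoustic features, e.g. log-mel frames
        mask = torch.rand(frames.shape[:2], device=frames.device) < self.mask_ratio
        corrupted = torch.where(mask.unsqueeze(-1), self.mask_token, frames)
        pred = self.head(self.encoder(corrupted))
        # Loss only on masked positions: the model must reconstruct them from
        # surrounding context, which rewards tracking repetition and long-range
        # dependencies -- the cues that define musical sections.
        return nn.functional.mse_loss(pred[mask], frames[mask])


loss = MaskedFramePretrainer()(torch.randn(2, 400, 128))
loss.backward()
```

The intuition for MSA: filling in a masked span forces the encoder to model where it sits relative to repeated and contrasting material, which plausibly explains why this objective family transfers well to structural tasks.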