🤖 AI Summary
In universal audio representation learning, the entanglement of multiple scaling factors (sequence length, embedding dimension, model depth, and dataset size, among others) hampers principled modeling of scaling laws. To address this, we propose an information-theoretic evaluation framework based on embedding effective rank (RankMe), introducing it for the first time as an unsupervised, label-agnostic, unified metric for systematically characterizing how model size, data volume, compute budget, and architectural choices jointly influence representation quality. Through large-scale hyperparameter sweeps and power-law regression, we establish a stable power-law relationship between RankMe and downstream task performance, validating the applicability of classical scaling laws to audio. This work establishes RankMe as a reliable proxy for audio foundation model performance and delivers an interpretable, generalizable scaling theory, enabling principled model design and resource allocation.
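For concreteness, RankMe (Garrido et al., 2023) is the exponential of the Shannon entropy of the normalized singular-value distribution of an embedding matrix. Below is a minimal NumPy sketch, assuming the model's embeddings are stacked into an (N, D) matrix; the variable names and toy matrices are illustrative, not from the paper.

```python
import numpy as np

def rankme(embeddings: np.ndarray, eps: float = 1e-7) -> float:
    """Effective rank of an (N, D) embedding matrix.

    Computes exp(-sum(p_k * log p_k)) over the normalized
    singular values p_k = sigma_k / ||sigma||_1 + eps.
    """
    s = np.linalg.svd(embeddings, compute_uv=False)
    p = s / s.sum() + eps  # normalized singular-value distribution
    return float(np.exp(-np.sum(p * np.log(p))))

# Toy illustration: an isotropic Gaussian matrix has high effective
# rank, while a rank-collapsed (outer-product) matrix is near 1.
rng = np.random.default_rng(0)
full = rng.standard_normal((1024, 64))
collapsed = np.outer(rng.standard_normal(1024), rng.standard_normal(64))
print(rankme(full), rankme(collapsed))
```

Because the metric depends only on the embeddings themselves, it requires no labels, which is what allows it to be swept across architectures and data scales uniformly.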
📝 Abstract
Scaling laws have profoundly shaped our understanding of model performance in computer vision and natural language processing, yet their application to general audio representation learning remains underexplored. A key challenge lies in the multifactorial nature of general audio representations: representation quality is jointly influenced by variables such as audio length, embedding dimensionality, model depth, model architecture, and data volume, many of which are difficult to isolate or express analytically. In this work, we present a systematic study of scaling laws for general audio representations by utilizing embedding effective rank (RankMe) as a unifying metric that encapsulates the impact of diverse variables on representation quality. RankMe enables a label-free, information-theoretic quantification of audio embeddings, allowing us to examine scaling behaviors across a wide hyper-parameter space, including model size, training data volume, computational budget, and architectural configuration. Our empirical findings reveal a consistent power-law relationship between RankMe and representation quality, suggesting that embedding effective rank serves as a reliable proxy for assessing and predicting model performance in audio representation learning. This work not only validates the applicability of classical scaling principles to the general audio domain but also offers a theoretically grounded and empirically robust framework for guiding future model scaling strategies in audio foundation models.
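The power-law relationship described above can be fit by ordinary least squares in log-log space, since a power law score = a · RankMe^b becomes linear after taking logarithms. The sketch below uses made-up (RankMe, downstream-score) pairs purely to illustrate the regression procedure; the actual values come from the paper's hyper-parameter sweeps.

```python
import numpy as np

# Hypothetical (RankMe, downstream-score) pairs for illustration only.
rank_me = np.array([8.0, 16.0, 32.0, 64.0, 128.0])
score = np.array([0.31, 0.38, 0.47, 0.58, 0.71])

# A power law score = a * rank_me**b is linear in log-log space:
#   log(score) = log(a) + b * log(rank_me)
b, log_a = np.polyfit(np.log(rank_me), np.log(score), 1)
a = np.exp(log_a)
print(f"score ~= {a:.3f} * RankMe^{b:.3f}")
```

A positive exponent b in such a fit would indicate that higher embedding effective rank predicts better downstream performance, which is the sense in which RankMe acts as an unsupervised proxy.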