Unify Variables in Neural Scaling Laws for General Audio Representations via Embedding Effective Rank

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
In universal audio representation learning, the entanglement of multiple scaling factors—including sequence length, embedding dimension, model depth, and dataset size—hampers principled modeling of scaling laws. To address this, we propose an information-theoretic evaluation framework based on embedding effective rank (RankMe), introducing it for the first time as an unsupervised, label-agnostic, unified metric to systematically characterize how model size, data volume, compute budget, and architectural choices jointly influence representation quality. Through large-scale hyperparameter sweeps and power-law regression, we establish a stable power-law relationship between RankMe and downstream task performance, validating the applicability of classical scaling laws to audio. This work establishes RankMe as a reliable proxy for audio foundation model performance and delivers an interpretable, generalizable scaling theory—enabling principled model design and resource allocation.

📝 Abstract
Scaling laws have profoundly shaped our understanding of model performance in computer vision and natural language processing, yet their application to general audio representation learning remains underexplored. A key challenge lies in the multifactorial nature of general audio representations: representation quality is jointly influenced by variables such as audio length, embedding dimensionality, model depth, model architecture, data volume, etc., many of which are difficult to isolate or express analytically. In this work, we present a systematic study of scaling laws for general audio representations by utilizing embedding effective rank (RankMe) as a unifying metric that encapsulates the impact of diverse variables on representation quality. RankMe enables a label-free, information-theoretic quantification of audio embeddings, allowing us to examine scaling behaviors across a wide hyper-parameter space, including model size, training data volume, computational budget, architectural configurations, etc. Our empirical findings reveal a consistent power-law relationship between RankMe and representation quality, suggesting that embedding effective rank serves as a reliable proxy for assessing and predicting model performance in audio representation learning. This work not only validates the applicability of classical scaling principles to the general audio domain but also offers a theoretically grounded and empirically robust framework for guiding future model scaling strategies in audio foundation models.
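As a concrete illustration of the metric, RankMe is commonly defined as the exponential of the Shannon entropy of the normalized singular-value distribution of the embedding matrix. The paper's implementation details are not reproduced here, so the following is a minimal sketch under that standard definition (the function name and `eps` smoothing term are illustrative choices):

```python
import numpy as np

def rankme(embeddings: np.ndarray, eps: float = 1e-7) -> float:
    """Effective rank of an (n_samples, dim) embedding matrix.

    RankMe = exp(entropy of the normalized singular values);
    it ranges from 1 (collapsed) to min(n_samples, dim) (full rank).
    """
    # Singular values of the embedding matrix
    s = np.linalg.svd(embeddings, compute_uv=False)
    # Normalize singular values into a probability distribution
    p = s / (s.sum() + eps) + eps
    # Exponential of the Shannon entropy
    return float(np.exp(-np.sum(p * np.log(p))))
```

For example, a full-rank identity-like embedding yields an effective rank near its dimensionality, while a collapsed (rank-1) embedding yields a value near 1, making the score easy to read as "how many dimensions are actually being used."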
Problem

Research questions and friction points this paper is trying to address.

Unifying the many entangled scaling variables in general audio representation learning
Using embedding effective rank to quantify the joint impact of diverse variables on representation quality
Establishing a power-law relationship between RankMe and audio representation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unify scaling variables via embedding effective rank
Use RankMe for label-free, information-theoretic embedding quantification
Establish a power-law relationship for performance prediction
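The reported power-law relationship between RankMe and downstream performance can be fit by linear regression in log-log space. The paper's exact functional form and fitting procedure are not given here, so this is a minimal sketch assuming the simple form score ≈ a · RankMe^b (the function name and regression method are illustrative assumptions):

```python
import numpy as np

def fit_power_law(rankme_vals: np.ndarray, scores: np.ndarray):
    """Fit scores ≈ a * rankme**b via least squares in log-log space.

    Returns the estimated coefficients (a, b); b is the scaling exponent.
    """
    log_x = np.log(rankme_vals)
    log_y = np.log(scores)
    # A power law is a straight line in log-log coordinates:
    # log y = b * log x + log a
    b, log_a = np.polyfit(log_x, log_y, deg=1)
    return float(np.exp(log_a)), float(b)
```

Once fitted on small-scale sweeps, such a curve can extrapolate expected downstream performance for larger models from their measured RankMe alone, which is what makes the metric useful for resource allocation.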
👥 Authors

Xuyao Deng
College of Computer Science and Technology, National University of Defense Technology, Changsha 470000 China

Yanjie Sun
The Hong Kong Polytechnic University

Yong Dou
College of Computer Science and Technology, National University of Defense Technology, Changsha 470000 China

Kele Xu
College of Computer Science and Technology, National University of Defense Technology, Changsha 470000 China