🤖 AI Summary
This work investigates the generalization performance and spectral structure of the matrix-valued predictors that arise in masked self-supervised learning when predictions are aggregated over many masking patterns, in a high-dimensional setting. Leveraging random matrix theory within an asymptotic framework where sample size and dimension grow proportionally, the study establishes the first high-dimensional theoretical analysis of masked self-supervised learning. The core contributions include deriving an explicit expression for the generalization error, characterizing the spectral properties of the aggregated predictor, revealing a BBP-type phase transition under spiked covariance models, and identifying the precise threshold conditions under which latent signals can be recovered. The analysis further provides theoretical evidence that this approach outperforms classical PCA in certain structured scenarios.
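To make the "aggregated, matrix-valued predictor" concrete, here is a minimal sketch, not the paper's estimator: it assumes a linear masked-reconstruction setup in which, for each random masking pattern, the masked coordinates are ridge-regressed on the visible ones and the per-mask coefficient matrices are averaged into a single matrix `W`. The function name `masked_ssl_predictor` and all numerical choices (mask fraction, ridge strength, spike size) are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_ssl_predictor(X, n_masks=200, mask_frac=0.5, ridge=1e-2):
    """Average ridge solutions over random masking patterns into one matrix.

    X : (n, d) data matrix.  Returns a (d, d) matrix W whose (i, j) entry
    encodes how visible coordinate j is used to predict coordinate i
    whenever i is masked.
    """
    n, d = X.shape
    W = np.zeros((d, d))
    counts = np.zeros((d, d))
    for _ in range(n_masks):
        masked = rng.random(d) < mask_frac           # coordinates to reconstruct
        visible = ~masked
        if masked.sum() == 0 or visible.sum() == 0:
            continue
        Xv, Xm = X[:, visible], X[:, masked]
        # Ridge regression of the masked block on the visible block.
        G = Xv.T @ Xv + ridge * n * np.eye(visible.sum())
        B = np.linalg.solve(G, Xv.T @ Xm)            # (n_visible, n_masked)
        W[np.ix_(masked, visible)] += B.T
        counts[np.ix_(masked, visible)] += 1.0
    return np.divide(W, counts, out=np.zeros_like(W), where=counts > 0)

# Toy data from a rank-1 spiked covariance model: Sigma = I + beta * v v^T.
n, d, beta = 2000, 400, 3.0
v = rng.standard_normal(d)
v /= np.linalg.norm(v)
X = rng.standard_normal((n, d)) + np.sqrt(beta) * rng.standard_normal((n, 1)) * v
W = masked_ssl_predictor(X)
# The top singular vector of the aggregated predictor should align with the spike.
u = np.linalg.svd(W)[0][:, 0]
print("overlap with spike direction:", abs(u @ v))
```

Under this toy setup, the spectral structure of `W` carries the latent spike direction, which is the kind of object the paper's spectral analysis characterizes.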
📝 Abstract
In the era of transformer models, masked self-supervised learning (SSL) has become a foundational training paradigm. A defining feature of masked SSL is that training aggregates predictions across many masking patterns, giving rise to a joint, matrix-valued predictor rather than a single vector-valued estimator. This object encodes how coordinates condition on one another and poses new analytical challenges. We develop a precise high-dimensional analysis of masked modeling objectives in the proportional regime where the number of samples scales with the ambient dimension. Our results provide explicit expressions for the generalization error and characterize the spectral structure of the learned predictor, revealing how masked modeling extracts structure from data. For spiked covariance models, we show that the joint predictor undergoes a Baik–Ben Arous–Péché (BBP)-type phase transition, identifying when masked SSL begins to recover latent signals. Finally, we identify structured regimes in which masked SSL provably outperforms PCA, highlighting potential advantages of SSL objectives over classical unsupervised methods.
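For intuition about the BBP-type transition referenced in the abstract, the snippet below simulates the classical BBP phenomenon for PCA on a rank-1 spiked covariance model (Σ = I + β vvᵀ): with aspect ratio γ = d/n, the top sample-covariance eigenvalue detaches from the Marchenko–Pastur bulk edge (1 + √γ)² only when β > √γ, where it converges to (1 + β)(1 + γ/β). This is the well-known baseline phenomenon for PCA, shown only as a reference point; it is not the paper's threshold for the masked-SSL predictor, and the sample sizes and spike strengths are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4000, 1000                       # gamma = d/n = 0.25, PCA threshold sqrt(gamma) = 0.5
gamma = d / n
v = rng.standard_normal(d)
v /= np.linalg.norm(v)

for beta in (0.2, 0.5, 1.0, 3.0):
    # Samples with covariance I + beta * v v^T.
    X = rng.standard_normal((n, d)) + np.sqrt(beta) * rng.standard_normal((n, 1)) * v
    S = X.T @ X / n                     # sample covariance
    evals, evecs = np.linalg.eigh(S)
    top_eval, top_evec = evals[-1], evecs[:, -1]
    bulk_edge = (1 + np.sqrt(gamma)) ** 2
    # Classical BBP prediction: sticks to the bulk edge below threshold,
    # separates to (1 + beta)(1 + gamma / beta) above it.
    predicted = (1 + beta) * (1 + gamma / beta) if beta > np.sqrt(gamma) else bulk_edge
    print(f"beta={beta:4.1f}  top eigenvalue={top_eval:5.2f}  "
          f"predicted={predicted:5.2f}  |<v, top eigvec>|={abs(v @ top_evec):.2f}")
```

Below the threshold the leading eigenvector carries essentially no information about the spike; above it, the overlap becomes macroscopic, which is the sense in which "latent signals begin to be recovered."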