Understanding and Enhancing Mask-Based Pretraining towards Universal Representations

📅 2025-09-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Despite the empirical success of masked pretraining across language, vision, and biology, its mechanistic principles and theoretical limits remain poorly understood. We show that the behavior of masked autoencoding can be characterized by the test risk of minimum-norm ("ridge-less") regression in high-dimensional linear models, and we validate the theory's predictions across MLPs, CNNs, and Transformers. Guided by this analysis, we propose Randomly Random Mask AutoEncoding (R²MAE), which resamples the masking ratio throughout pretraining rather than fixing it, compelling models to learn multi-scale features. Implemented in vision, language, DNA-sequence, and single-cell models, R²MAE consistently outperforms standard and state-of-the-art masking strategies on downstream tasks, improving representation quality and generalization. Together, these results provide a unified theoretical account of masked pretraining and a simple, principled recipe for multimodal self-supervised learning.
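
A minimal sketch of the random-ratio masking idea described above, assuming token-level masking on a 1-D sequence. The function name random_ratio_mask, the uniform distribution over ratios, and the per-sample resampling are illustrative assumptions for exposition, not the paper's exact procedure.

```python
import numpy as np

def random_ratio_mask(num_tokens, rng, low=0.0, high=1.0):
    """Draw a fresh masking ratio at random, then hide that fraction of
    token positions. Illustrative only: the paper's exact sampling
    distribution and masking granularity may differ."""
    ratio = rng.uniform(low, high)              # the ratio itself is resampled each call
    num_masked = int(round(ratio * num_tokens))
    mask = np.zeros(num_tokens, dtype=bool)
    masked_idx = rng.choice(num_tokens, size=num_masked, replace=False)
    mask[masked_idx] = True                     # True = position hidden from the encoder
    return mask

# A new ratio is drawn for every sample (or batch), so the model faces
# reconstruction tasks at many scales over the course of pretraining.
rng = np.random.default_rng(0)
print(random_ratio_mask(num_tokens=16, rng=rng).astype(int))
```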

📝 Abstract
Mask-based pretraining has become a cornerstone of modern large-scale models across language, vision, and recently biology. Despite its empirical success, its role and limits in learning data representations have been unclear. In this work, we show that the behavior of mask-based pretraining can be directly characterized by test risk in high-dimensional minimum-norm ("ridge-less") linear regression, without relying on further model specifications. Further analysis of linear models uncovers several novel aspects of mask-based pretraining. The theoretical framework and its implications have been validated across diverse neural architectures (including MLPs, CNNs, and Transformers) applied to both vision and language tasks. Guided by our theory, we propose an embarrassingly simple yet overlooked pretraining scheme named Randomly Random Mask AutoEncoding (R$^2$MAE), which enforces capturing multi-scale features from data and is able to outperform optimal fixed mask ratio settings in our linear model framework. We implement R$^2$MAE in vision, language, DNA sequence, and single-cell models, where it consistently outperforms standard and more complicated masking schemes, leading to improvements for state-of-the-art models. Our code is available at: https://github.com/MingzeDong/r2mae
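
A small sketch of the minimum-norm ("ridge-less") regression estimator that the abstract's theoretical characterization rests on, under a toy overparameterized data model. The mapping from masked inputs and reconstruction targets to covariates and responses is simplified away here; the helper names and the synthetic data are assumptions for illustration.

```python
import numpy as np

def ridgeless_fit(X, y):
    """Minimum-norm least-squares solution beta_hat = X^+ y; in the
    overparameterized regime (p > n) this is the interpolating solution
    with the smallest l2 norm."""
    return np.linalg.pinv(X) @ y

def test_risk(beta_hat, X_test, y_test):
    """Empirical test risk (mean squared prediction error)."""
    return float(np.mean((X_test @ beta_hat - y_test) ** 2))

# Toy high-dimensional setting loosely mirroring the regime analyzed in
# the paper; the true data model for masked pretraining is richer.
rng = np.random.default_rng(0)
n, p = 50, 200
beta = rng.normal(size=p) / np.sqrt(p)
X_train, X_test = rng.normal(size=(n, p)), rng.normal(size=(n, p))
y_train = X_train @ beta + 0.1 * rng.normal(size=n)
y_test = X_test @ beta + 0.1 * rng.normal(size=n)
print(test_risk(ridgeless_fit(X_train, y_train), X_test, y_test))
```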
Problem

Research questions and friction points this paper is trying to address.

Characterizing the behavior of mask-based pretraining through high-dimensional linear regression
Uncovering previously overlooked properties of mask pretraining via linear-model analysis
Designing a masking scheme that yields more universal representations than fixed-ratio masking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using minimum-norm linear regression to characterize mask-based pretraining without further model assumptions
Proposing Randomly Random Mask AutoEncoding (R²MAE) to enforce multi-scale feature learning
Validating the framework across diverse architectures and outperforming standard and more complicated masking schemes
Mingze Dong
Yale University
Leda Wang
Yale University
Yuval Kluger
Yale University