🤖 AI Summary
This work challenges the “instance consistency” assumption in self-supervised learning (SSL)—that different views of the same image necessarily share identical semantics—which fails on non-iconic data. We propose a novel perspective: view diversity, rather than strict consistency, enhances representation learning. Empirically, we find that *moderate* semantic divergence between views—not maximal divergence—yields optimal performance on downstream classification and dense prediction tasks. To quantify inter-view semantic distance, we introduce Earth Mover’s Distance (EMD) as a proxy estimator of mutual information. View diversity is then controllably modulated via multi-scale cropping with zero-overlap constraints. Extensive experiments across diverse benchmarks confirm the existence of an optimal diversity regime, enabling consistent and significant gains over conventional contrastive SSL baselines. Our findings establish a new paradigm for SSL, bridging theoretical insight—rethinking semantic alignment—with practical design principles for view generation.
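The "multi-scale cropping with zero-overlap constraints" mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's exact augmentation recipe: it samples two square crops whose boxes share no pixels, with a `scale` knob (an assumed parameter name) controlling crop size and hence view diversity.

```python
import random

def sample_disjoint_crops(img_w, img_h, scale=0.3, max_tries=100):
    """Sample two square crop boxes (x0, y0, x1, y1) with zero spatial
    overlap. `scale` is the crop side as a fraction of the short edge;
    smaller scales and the zero-overlap constraint both increase view
    diversity. Returns None if no disjoint pair is found."""
    side = int(min(img_w, img_h) * scale)

    def sample_box():
        x = random.randint(0, img_w - side)
        y = random.randint(0, img_h - side)
        return (x, y, x + side, y + side)

    def boxes_overlap(a, b):
        # Positive intersection width AND height means overlap.
        iw = min(a[2], b[2]) - max(a[0], b[0])
        ih = min(a[3], b[3]) - max(a[1], b[1])
        return iw > 0 and ih > 0

    box1 = sample_box()
    for _ in range(max_tries):
        box2 = sample_box()
        if not boxes_overlap(box1, box2):
            return box1, box2
    return None  # scale too large to fit two disjoint crops
```

In practice the two boxes would be passed to an image-cropping transform; rejection sampling suffices here because at moderate scales a disjoint pair is found almost immediately.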
📝 Abstract
Self-supervised learning (SSL) conventionally relies on the instance consistency paradigm, assuming that different views of the same image can be treated as positive pairs. However, this assumption breaks down for non-iconic data, where different views may contain distinct objects or semantic information. In this paper, we investigate the effectiveness of SSL when instance consistency is not guaranteed. Through extensive ablation studies, we demonstrate that SSL can still learn meaningful representations even when positive pairs lack strict instance consistency. Furthermore, our analysis reveals that increasing view diversity, by enforcing zero overlap between views or using smaller crop scales, can enhance downstream performance on classification and dense prediction tasks. However, excessive diversity is found to reduce effectiveness, suggesting an optimal range for view diversity. To quantify this, we adopt the Earth Mover's Distance (EMD) as an estimator of the mutual information between views, finding that moderate EMD values correlate with improved SSL learning and provide insights for future SSL framework design. We validate our findings across a range of settings, highlighting their robustness and applicability across diverse data sources.
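One simple way to realize the EMD estimator described above is as an optimal matching over patch features. The sketch below assumes the two views are each represented by an equal-size set of patch embeddings with uniform mass, in which case EMD reduces to a minimum-cost assignment; the function name, cosine cost, and uniform-mass assumption are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def emd_between_views(feats_a, feats_b):
    """Earth Mover's Distance between two views, each given as an
    (n, d) array of patch embeddings with uniform mass per patch.
    With equal set sizes and uniform weights, EMD equals the mean
    cost of the optimal one-to-one matching (Hungarian algorithm)."""
    # Cosine distance cost matrix between L2-normalized patch features.
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T  # entries in [0, 2]
    row, col = linear_sum_assignment(cost)
    return cost[row, col].mean()
```

Under the paper's finding, one would tune the view-generation parameters so that this distance lands in a moderate range rather than being minimized or maximized.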