🤖 AI Summary
To address the high variance of contrastive methods and the collapse-prone nature of non-contrastive approaches in self-supervised learning, this paper proposes the Mutual Information Non-Contrastive (MINC) loss. MINC unifies the theoretical foundation of the spectral contrastive loss with low-variance non-contrastive optimization: it derives a pairwise-comparison-free objective from a mutual information lower bound and pairs it with a momentum encoder, updated as an exponential moving average (EMA) of the online network, inside an asymmetric architecture. By avoiding both feature collapse and costly pairwise similarity computations, MINC improves training stability without requiring very large batch sizes. On ImageNet, MINC achieves better downstream transfer performance than the original spectral contrastive baseline while exhibiting substantially lower training variance. These results empirically support both the theoretical consistency and the practical efficacy of the proposed framework.
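The BYOL-style momentum machinery mentioned above can be made concrete with a short sketch. The snippet below is illustrative only: the toy encoder and the decay rate `tau` are our own assumptions (`tau = 0.996` is BYOL's default), and the paper's actual architecture is not reproduced here.

```python
import copy

import torch

# Toy online encoder standing in for a real backbone (e.g., a ResNet).
online_encoder = torch.nn.Sequential(
    torch.nn.Linear(512, 256), torch.nn.ReLU(), torch.nn.Linear(256, 128)
)

# The momentum (target) encoder starts as a copy and receives no gradients;
# it is updated only through the EMA step below.
target_encoder = copy.deepcopy(online_encoder)
for p in target_encoder.parameters():
    p.requires_grad = False

@torch.no_grad()
def ema_update(online, target, tau=0.996):
    """BYOL-style momentum update: theta_t <- tau * theta_t + (1 - tau) * theta_o."""
    for p_online, p_target in zip(online.parameters(), target.parameters()):
        p_target.mul_(tau).add_(p_online, alpha=1.0 - tau)
```

Calling `ema_update(online_encoder, target_encoder)` once per optimizer step keeps the target a slowly moving average of the online weights, the ingredient that stabilizes non-contrastive training.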
📝 Abstract
Labeling data is often time-consuming and expensive, leaving the majority of data unlabeled. Self-supervised representation learning methods such as SimCLR (Chen et al., 2020) or BYOL (Grill et al., 2020) have been very successful at learning meaningful latent representations from unlabeled image data, resulting in much more general and transferable representations for downstream tasks. Broadly, self-supervised methods fall into two types: 1) Contrastive methods, such as SimCLR; and 2) Non-contrastive methods, such as BYOL. Contrastive methods generally aim to maximize mutual information between related data points, so they must compare every data point to every other data point; this makes their loss estimates high-variance and thus requires large batch sizes to work well. Non-contrastive methods like BYOL have much lower variance because they avoid pairwise comparisons, but they are much trickier to train, as their representations can collapse to a constant vector. In this paper, we aim to develop a self-supervised objective that combines the strengths of both types. We start with a particular contrastive method, the Spectral Contrastive Loss (HaoChen et al., 2021; Lu et al., 2024), and convert it into a more general non-contrastive form: this removes the pairwise comparisons, lowering the variance, while keeping the mutual information formulation of the contrastive method, which prevents collapse. We call our new objective the Mutual Information Non-Contrastive (MINC) loss. We test MINC by learning image representations on ImageNet (as in SimCLR and BYOL) and show that it consistently improves upon the Spectral Contrastive loss baseline.
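For reference, the starting point of the paper, the spectral contrastive loss of HaoChen et al. (2021), is L(f) = -2 E[f(x)^T f(x+)] + E[(f(x)^T f(x'))^2]. A minimal PyTorch sketch of the batched estimator is below; this is our own illustrative rendering, not the paper's code, and implementations differ on details such as whether positive pairs are masked out of the repulsion term.

```python
import torch

def spectral_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Spectral contrastive loss (HaoChen et al., 2021).

    z1, z2: [batch, dim] embeddings of two augmented views of the same images.
    """
    # Attraction: inner products between matched (positive) views.
    attract = -2.0 * (z1 * z2).sum(dim=1).mean()
    # Repulsion: squared inner products over all cross-view pairs in the batch.
    # These O(batch^2) comparisons are exactly what MINC removes.
    repel = ((z1 @ z2.T) ** 2).mean()
    return attract + repel
```

The repulsion term is the source of both the collapse resistance and the pairwise cost; MINC's non-contrastive form keeps the mutual information interpretation while dropping these comparisons.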