Dual Perspectives on Non-Contrastive Self-Supervised Learning

📅 2025-06-18
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the fundamental question of why “stop-gradient” and “exponential moving average” (EMA) prevent representation collapse in non-contrastive self-supervised learning. We formulate the learning dynamics as a continuous-time dynamical system and, for the first time without auxiliary assumptions, rigorously prove that direct minimization of the original objective inevitably leads to representational degeneracy in the linear regime. In contrast, stop-gradient and EMA—though not optimizing the original objective nor any smooth surrogate—induce equilibrium points with asymptotic stability, thereby ensuring persistent evolution toward non-degenerate representations. By unifying optimization theory with dynamical systems analysis, we reveal the mechanistic essence of collapse prevention: rather than refining the optimization objective, these techniques restructure the stability landscape of the parameter space, endowing the dynamics with inherent robustness against collapse.
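To fix ideas, here is a minimal sketch of the linear setting studied in this line of work (e.g., Tian et al. [2021]); the notation is illustrative and may not match the paper's exact formulation. With a linear online encoder W, predictor W_p, EMA target encoder W_a, and two augmented views x_1, x_2 of the same sample, the non-contrastive objective and the continuous-time EMA dynamics can be written as

$$
\mathcal{L}(W, W_p) \;=\; \tfrac{1}{2}\,\mathbb{E}_{x_1, x_2}\bigl\|\,W_p W x_1 - \mathrm{sg}(W_a x_2)\,\bigr\|^2,
\qquad
\dot{W}_a \;=\; \beta\,(W - W_a),
$$

where sg denotes the stop-gradient (its argument is treated as a constant during differentiation) and beta > 0 sets the EMA time constant. In this notation, the paper's result is that gradient flow on the objective with W_a = W and no stop-gradient drives the representation to degeneracy, whereas the stop-gradient/EMA dynamics admit asymptotically stable, non-degenerate equilibria.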

📝 Abstract
Non-contrastive approaches to self-supervised learning train an encoder and a predictor on pairs of different views of the data, minimizing the mean discrepancy between the code predicted from the embedding of the first view and the embedding of the second view. In this setting, the stop gradient and exponential moving average iterative procedures are commonly used to avoid representation collapse, with excellent performance in downstream supervised applications. This presentation investigates these procedures from the dual theoretical viewpoints of optimization and dynamical systems. We first show that, in general, although they do not optimize the original objective, or for that matter, any other smooth function, they do avoid collapse. Following Tian et al. [2021], but without any of the extra assumptions used in their proofs, we then show using a dynamical systems perspective that, in the linear case, minimizing the original objective function without the use of a stop gradient or exponential moving average always leads to collapse. Conversely, we finally show that the limit points of the dynamical systems associated with these two procedures are, in general, asymptotically stable equilibria, with no risk of degenerating to trivial solutions.
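To connect these iterative procedures to practice, here is a minimal, hypothetical training-step sketch in the style of BYOL/SimSiam; it is not the paper's code, and the architecture, learning rate, and EMA decay are placeholder assumptions. The stop gradient appears as the .detach() on the target branch, and the exponential moving average is an explicit update of the target encoder rather than a gradient step on the loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 32
encoder = nn.Linear(dim, dim, bias=False)     # online encoder (linear, for simplicity)
predictor = nn.Linear(dim, dim, bias=False)   # predictor head
target = nn.Linear(dim, dim, bias=False)      # EMA ("momentum") target encoder
target.load_state_dict(encoder.state_dict())
for p in target.parameters():
    p.requires_grad_(False)

opt = torch.optim.SGD(list(encoder.parameters()) + list(predictor.parameters()), lr=0.05)
tau = 0.99  # EMA decay (placeholder value)

def training_step(x1, x2):
    # Online branch: predict the target code of view 2 from the embedding of view 1.
    p1 = predictor(encoder(x1))
    z2 = target(x2).detach()    # stop gradient: the target branch is treated as a constant
    loss = F.mse_loss(p1, z2)   # mean discrepancy between predicted code and target embedding
    opt.zero_grad()
    loss.backward()
    opt.step()
    # EMA update of the target encoder; note that this is not a gradient step on the loss.
    with torch.no_grad():
        for pt, po in zip(target.parameters(), encoder.parameters()):
            pt.mul_(tau).add_(po, alpha=1.0 - tau)
    return loss.item()

x1 = torch.randn(128, dim)
x2 = x1 + 0.1 * torch.randn(128, dim)   # crude stand-in for two augmented views of the same data
print(training_step(x1, x2))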
Problem

Research questions and friction points this paper is trying to address.

Analyzing stop gradient and EMA in non-contrastive self-supervised learning
Investigating collapse prevention mechanisms from optimization perspectives
Characterizing stable equilibria in linear self-supervised learning systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stop gradient prevents representation collapse
Exponential moving average stabilizes learning dynamics
Dynamical systems theory characterizes stable equilibria (see the numerical sketch after this list)
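These points can be illustrated numerically in the population (expected-gradient) linear setting; the dimensions, step size, view-noise level, and EMA decay below are illustrative assumptions, not values from the paper. Joint gradient descent on the objective shrinks the online encoder toward zero, while the stop-gradient plus EMA iteration, which does not minimize that objective, typically settles on a non-degenerate encoder.

import numpy as np

rng = np.random.default_rng(0)
d, lr, tau, steps = 8, 0.05, 0.99, 5000
c = 1.0 + 0.5 ** 2   # E[x1 x1^T] = c * I for views x_i = x + 0.5 * e_i with x, e_i ~ N(0, I)

def run(stop_grad):
    W = 0.1 * rng.standard_normal((d, d))    # online (linear) encoder
    Wp = 0.1 * rng.standard_normal((d, d))   # predictor
    Wa = W.copy()                            # target encoder (starts as a copy of W)
    for _ in range(steps):
        if not stop_grad:
            Wa = W                           # no separate target: both branches share W
        R = c * (Wp @ W) - Wa                # population residual of 0.5 * E||Wp W x1 - Wa x2||^2
        gWp = R @ W.T                        # gradient with respect to the predictor
        gW = Wp.T @ R                        # gradient with respect to W, online branch only
        if not stop_grad:
            gW = gW + c * W - Wp @ W         # extra term: the gradient also flows through the target branch
        Wp -= lr * gWp
        W -= lr * gW
        if stop_grad:
            Wa = tau * Wa + (1.0 - tau) * W  # EMA update of the target (not a gradient step)
    return np.linalg.norm(W)

print("||W|| with stop gradient + EMA:", run(True))         # typically stays well away from zero
print("||W|| with direct joint minimization:", run(False))   # shrinks toward zero (collapse)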
Jean Ponce
Ecole Normale Superieure/PSL Research University
computer vision, machine learning, robotics
Basile Terver
Meta FAIR / INRIA Paris
Martial Hebert
Carnegie-Mellon University