Explaining Grokking and Information Bottleneck through Neural Collapse Emergence

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the shared mechanistic underpinnings of grokking—where test performance surges abruptly long after the training loss has converged—and the information bottleneck—where models compress task-irrelevant input information. It proposes neural collapse (the progressive shrinkage of within-class variance in representation space) as a unifying geometric lens, providing a theoretical framework that explains both phenomena jointly. The analysis shows that the reduction of within-class variance drives information compression and triggers the generalization leap, and that the separation of time scales between fitting the training set and the progression of collapse accounts for grokking's delayed onset. Experiments on CIFAR-10/100 and ImageNet across diverse architectures (ResNet, ViT) show that neural collapse trajectories quantitatively track the generalization phase transition, identifying neural collapse as a core dynamical driver of late-stage generalization and offering a new lens on implicit regularization in deep learning.

📝 Abstract
The training dynamics of deep neural networks often defy expectations, even as these models form the foundation of modern machine learning. Two prominent examples are grokking, where test performance improves abruptly long after the training loss has plateaued, and the information bottleneck principle, where models progressively discard input information irrelevant to the prediction task as training proceeds. However, the mechanisms underlying these phenomena and their relations remain poorly understood. In this work, we present a unified explanation of such late-phase phenomena through the lens of neural collapse, which characterizes the geometry of learned representations. We show that the contraction of population within-class variance is a key factor underlying both grokking and information bottleneck, and relate this measure to the neural collapse measure defined on the training set. By analyzing the dynamics of neural collapse, we show that distinct time scales between fitting the training set and the progression of neural collapse account for the behavior of the late-phase phenomena. Finally, we validate our theoretical findings on multiple datasets and architectures.
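The contraction of within-class variance that the abstract highlights is commonly quantified by an NC1-style ratio of within-class to total variance of the learned representations. Below is a minimal illustrative sketch of such a measure; the function name and the toy data are mine, not from the paper, and the paper's own measure may differ in detail (e.g. using a pseudo-inverse of the between-class covariance).

```python
import numpy as np

def within_class_variance_ratio(features: np.ndarray, labels: np.ndarray) -> float:
    """Ratio of within-class variance to total variance of representations.

    features: (n_samples, dim) penultimate-layer activations
    labels:   (n_samples,) integer class labels
    A value near 0 indicates strong neural collapse (class means dominate).
    """
    global_mean = features.mean(axis=0)
    # Average squared distance of each sample from the global mean.
    total_var = np.mean(np.sum((features - global_mean) ** 2, axis=1))
    # Average squared distance of each sample from its own class mean.
    within_var = 0.0
    for c in np.unique(labels):
        cls = features[labels == c]
        within_var += np.sum((cls - cls.mean(axis=0)) ** 2)
    within_var /= features.shape[0]
    return within_var / total_var

# Toy check: two tight, well-separated clusters yield a ratio near 0,
# mimicking a collapsed representation.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 0.01, size=(50, 8)),
                    rng.normal(5.0, 0.01, size=(50, 8))])
y = np.array([0] * 50 + [1] * 50)
print(within_class_variance_ratio(X, y) < 0.01)  # True
```

Tracking this ratio on held-out data over training is one way to observe the cross-timescale behavior the paper describes: the training loss plateaus early while the ratio keeps shrinking.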
Problem

Research questions and friction points this paper is trying to address.

Explaining grokking and information bottleneck phenomena in neural networks
Understanding delayed test performance improvement after training loss plateaus
Analyzing neural collapse dynamics underlying late-phase training behaviors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified explanation using neural collapse geometry
Identified within-class variance contraction as key factor
Analyzed distinct time scales between training phases
Keitaro Sakamoto
Department of Computer Science, The University of Tokyo, Tokyo, Japan
Issei Sato
University of Tokyo
Machine learning