🤖 AI Summary
Existing normalization methods (e.g., BatchNorm, LayerNorm, RMSNorm) stabilize training through mean- and variance-based constraints alone, without explicitly preserving task-relevant information or suppressing task-irrelevant variation in representations. To address this, we propose IBNorm, the first normalization framework grounded in the Information Bottleneck (IB) principle. IBNorm introduces a bounded compression mechanism that explicitly shapes the mutual information between input, representation, and output, retaining task-critical features while discarding redundant variability. We theoretically establish that IBNorm achieves a higher IB value and a tighter generalization bound than conventional normalization. Empirically, IBNorm serves as a drop-in replacement for standard normalizers across diverse architectures, including LLaMA, GPT-2, ResNet, and ViT, and consistently outperforms state-of-the-art methods on both language and vision benchmarks. These results confirm its stronger representation learning and closer adherence to IB-theoretic principles.
📝 Abstract
Normalization is fundamental to deep learning, but existing approaches such as BatchNorm, LayerNorm, and RMSNorm are variance-centric: they enforce zero mean and/or unit variance, stabilizing training without controlling how representations capture task-relevant information. We propose IB-Inspired Normalization (IBNorm), a simple yet powerful family of methods grounded in the Information Bottleneck principle. IBNorm introduces bounded compression operations that encourage embeddings to preserve predictive information while suppressing nuisance variability, yielding more informative representations while retaining the stability and drop-in compatibility of standard normalization. Theoretically, we prove that IBNorm achieves a higher IB value and tighter generalization bounds than variance-centric methods. Empirically, IBNorm consistently outperforms BatchNorm, LayerNorm, and RMSNorm across large-scale language models (LLaMA, GPT-2) and vision models (ResNet, ViT), with mutual information analysis confirming superior information bottleneck behavior. Code will be released publicly.
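The abstract does not specify IBNorm's exact formula, only that it augments standard variance-based normalization with a bounded compression operation that acts as a drop-in replacement. As an illustration only, here is a minimal NumPy sketch of what such a layer *could* look like: RMS-style rescaling followed by a soft clip (`tanh`) that bounds activation magnitudes. The function name `ibnorm_sketch` and the `bound` hyperparameter are hypothetical, not the paper's actual method.

```python
import numpy as np

def ibnorm_sketch(x, weight, eps=1e-6, bound=3.0):
    """Hypothetical bounded-compression normalizer (NOT the paper's IBNorm).

    1. Rescale by the root-mean-square over the last axis, as in RMSNorm.
    2. Soft-clip the result into (-bound, bound) via a scaled tanh,
       a simple stand-in for a "bounded compression" operation.
    """
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True))
    x_hat = x / (rms + eps)                      # variance-centric step
    x_hat = bound * np.tanh(x_hat / bound)       # bounded compression step
    return weight * x_hat                        # learnable per-channel gain
```

Like RMSNorm, this sketch is (approximately) invariant to rescaling of its input, so it slots into the same position in a Transformer block; the added `tanh` caps outlier activations, which is one plausible reading of "suppressing nuisance variability."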