🤖 AI Summary
Class imbalance significantly degrades the performance of contrastive learning, yet the mechanisms by which it shapes training dynamics and induces representation bias remain theoretically underexplored. This work addresses that gap by introducing a theoretical framework that analyzes the training process of Transformer-based contrastive learning under imbalanced data through the lens of neuron weight evolution. The analysis reveals that neuron weights pass through three characteristic phases during training. Guided by these theoretical insights, the authors propose a targeted neuron pruning strategy that effectively mitigates representation bias. Experimental results demonstrate that the proposed method substantially enhances feature separability and overall representation quality in imbalanced scenarios.
📝 Abstract
Contrastive learning has emerged as a powerful framework for learning generalizable representations, yet its theoretical understanding remains limited, particularly under the imbalanced data distributions that are prevalent in real-world applications. Such imbalance can degrade representation quality and induce biased model behavior, yet a rigorous characterization of these effects is lacking. In this work, we develop a theoretical framework to analyze the training dynamics of contrastive learning with Transformer-based encoders under imbalanced data. Our results reveal that neuron weights evolve through three distinct stages of training, with different dynamics for majority features, minority features, and noise. We further show that minority features reduce representational capacity, necessitate more complex architectures, and hinder the separation of ground-truth features from noise. Inspired by these neuron-level behaviors, we show that pruning restores performance degraded by imbalance and enhances feature separation, offering both conceptual insights and practical guidance. Our major theoretical findings are validated through numerical experiments.
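The abstract distinguishes neurons that track ground-truth features from neurons dominated by noise, and proposes pruning the latter. The paper's exact pruning criterion is not given here; the sketch below is only one plausible instantiation under stated assumptions: each neuron is a row of a weight matrix, feature directions are known unit vectors, and a neuron is pruned when its best cosine alignment with any feature direction falls below a threshold. All names and the threshold value are illustrative, not the authors' method.

```python
import numpy as np

def prune_noise_neurons(W, feature_dirs, align_threshold=0.5):
    """Hypothetical neuron-pruning sketch.

    W            : (num_neurons, dim) neuron weight matrix
    feature_dirs : (num_features, dim) unit-norm feature directions
    Returns the pruned weight matrix and a boolean keep-mask.
    """
    # Cosine alignment of each (normalized) neuron with each feature.
    norms = np.linalg.norm(W, axis=1, keepdims=True) + 1e-12
    cos = (W / norms) @ feature_dirs.T        # (num_neurons, num_features)
    max_align = np.abs(cos).max(axis=1)       # best alignment per neuron
    keep = max_align >= align_threshold       # feature-aligned neurons survive
    W_pruned = W.copy()
    W_pruned[~keep] = 0.0                     # zero out noise-dominated neurons
    return W_pruned, keep

# Toy usage: neuron 0 aligns with the first feature direction and is kept;
# neuron 1 spreads its mass over non-feature coordinates and is pruned.
features = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0]])
W = np.array([[2.0, 0.0, 0.0, 0.0],
              [0.1, 0.1, 1.0, 1.0]])
W_pruned, keep = prune_noise_neurons(W, features)
```

A real criterion would more likely be derived from the three-phase dynamics (e.g. pruning by the neuron's trajectory over training rather than a single snapshot); the snapshot threshold above is just the simplest stand-in.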