A Theoretical Framework for Preventing Class Collapse in Supervised Contrastive Learning

📅 2025-03-11
📈 Citations: 1
Influential: 0
🤖 AI Summary
Supervised contrastive learning (SupCL) can suffer from class collapse—where embeddings of same-class samples become indistinguishable from one another—when the supervised and self-supervised loss terms are improperly balanced. Method: The authors propose the Simplex-to-Simplex Embedding Model (SSEM), a theoretical framework grounded in simplex geometry that characterizes the embedding structures minimizing the supervised contrastive loss, including the optimal intra-class distribution. It yields interpretable, computationally tractable guidelines for hyperparameter selection. Contribution/Results: Through theoretical modeling, geometric analysis, and loss optimization—validated on synthetic data and real-world benchmarks (CIFAR-10/100, ImageNet-LT)—the resulting guidelines mitigate class collapse, preserving discriminability among same-class embeddings while maintaining inter-class separability. To the authors' knowledge, this is the first geometry-driven, interpretable mechanism for *preventing* collapse in SupCL, bridging theoretical insight with practical efficacy.

📝 Abstract
Supervised contrastive learning (SupCL) has emerged as a prominent approach in representation learning, leveraging both supervised and self-supervised losses. However, achieving an optimal balance between these losses is challenging; failing to do so can lead to class collapse, reducing discrimination among individual embeddings in the same class. In this paper, we present theoretically grounded guidelines for SupCL to prevent class collapse in learned representations. Specifically, we introduce the Simplex-to-Simplex Embedding Model (SSEM), a theoretical framework that models various embedding structures, including all embeddings that minimize the supervised contrastive loss. Through SSEM, we analyze how hyperparameters affect learned representations, offering practical guidelines for hyperparameter selection to mitigate the risk of class collapse. Our theoretical findings are supported by empirical results across synthetic and real-world datasets.
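The balance the abstract describes can be made concrete with a toy implementation. The sketch below is not the paper's formulation: the InfoNCE-style loss, the weighting parameter `alpha`, the temperature `tau`, and the convention that rows `2k` and `2k+1` are two augmented views of the same image are all illustrative assumptions.

```python
import numpy as np

def info_nce(z, pos_idx, tau=0.1):
    """Generic InfoNCE: for anchor i, pos_idx[i] lists the indices
    treated as positives; all other samples act as negatives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = z @ z.T / tau                                 # temperature-scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(np.mean([-log_prob[i, pos_idx[i]].mean() for i in range(len(z))]))

def supcl_loss(z, labels, alpha=0.5, tau=0.1):
    """Weighted blend of a supervised term (positives = all same-class
    samples) and a self-supervised term (positive = the sample's own
    other augmented view). alpha controls the balance; a poor choice
    is the kind of imbalance that can drive class collapse."""
    n = len(z)
    sup_pos = [np.where((labels == labels[i]) & (np.arange(n) != i))[0]
               for i in range(n)]
    self_pos = [np.array([i + 1 if i % 2 == 0 else i - 1]) for i in range(n)]
    return alpha * info_nce(z, sup_pos, tau) + (1 - alpha) * info_nce(z, self_pos, tau)
```

With `alpha` close to 1 the supervised term dominates and same-class embeddings are pulled together indiscriminately; the paper's guidelines concern choosing such hyperparameters so that collapse is avoided.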
Problem

Research questions and friction points this paper is trying to address.

Prevent class collapse in supervised contrastive learning.
Balance the supervised and self-supervised loss terms effectively.
Provide hyperparameter-selection guidelines that mitigate the risk of class collapse.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the Simplex-to-Simplex Embedding Model (SSEM), which models embedding structures that minimize the supervised contrastive loss.
Analyzes how hyperparameters shape learned representations and can prevent class collapse.
Provides practical hyperparameter guidelines for supervised contrastive learning.
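The simplex geometry behind SSEM's name can be illustrated directly. The snippet below constructs a simplex equiangular tight frame (ETF), the configuration in which fully collapsed class means are known to arrange themselves; it is a geometric illustration only, not a reproduction of SSEM itself, and the function name `simplex_etf` is ours.

```python
import numpy as np

def simplex_etf(k):
    """Rows are k unit vectors in R^k forming a simplex ETF: every
    pair has inner product -1/(k-1), the maximally separated
    configuration for k class means. Under full class collapse, all
    embeddings of class c coincide at vertex c; preventing collapse
    means same-class embeddings spread around their vertex instead."""
    return np.sqrt(k / (k - 1)) * (np.eye(k) - np.ones((k, k)) / k)

v = simplex_etf(4)
print(np.round(v @ v.T, 3))  # diagonal 1.0, off-diagonal -0.333
```

The Gram matrix check confirms the defining property: unit-norm vertices with constant pairwise inner product −1/(k−1).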