Robust Contrastive Learning With Theory Guarantee

📅 2023-11-16
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing adversarial contrastive learning (ACL) methods improve robustness under linear probing but lack a theoretical analysis linking unsupervised contrastive loss to robust supervised performance. Method: This paper establishes, for the first time, a theoretical bridge between these two objectives by deriving generalization bounds and decomposing robust error, thereby identifying the loss components that dominantly govern robustness. Guided by this analysis, we propose a novel self-supervised pretraining objective within the contrastive learning framework. Contribution/Results: We theoretically prove that penalizing a specific loss term significantly reduces robust error. Empirically, our proposed loss variant improves robust accuracy by 2.3–4.1% on benchmarks including ImageNet-C. This work provides both an interpretable theoretical foundation and a practical optimization pathway for robust self-supervised learning.
📝 Abstract
Contrastive learning (CL) is a self-supervised training paradigm that allows us to extract meaningful features without any label information. A typical CL framework is divided into two phases, where it first tries to learn the features from unlabelled data, and then uses those features to train a linear classifier with the labeled data. While a fair amount of existing theoretical works have analyzed how the unsupervised loss in the first phase can support the supervised loss in the second phase, none has examined the connection between the unsupervised loss and the robust supervised loss, which can shed light on how to construct an effective unsupervised loss for the first phase of CL. To fill this gap, our work develops rigorous theories to dissect and identify which components in the unsupervised loss can help improve the robust supervised loss and conduct proper experiments to verify our findings.
Problem

Research questions and friction points this paper is trying to address.

Develops generalization bounds for robust contrastive learning
Identifies unsupervised components improving robust supervised loss
Reveals benign contrastive loss and divergence enhance robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes unsupervised components for robust supervised loss
Identifies benign contrastive loss enhancing adversarial robustness
Uses global divergence between benign and adversarial examples
🔎 Similar Papers
No similar papers found.
N
Ngoc N. Tran
Vanderbilt University
L
Lam Tran
VinAI Research
H
Hoang Phan
New York University
A
Anh-Vu Bui
Monash University
Tung Pham
Tung Pham
Qualcomm AI Research, Vietnam
Variational ApproximationMachine LearningOptimal TransportReasoning
T
Toan Tran
VinAI Research
D
Dinh Q. Phung
Monash University
Trung Le
Trung Le
Faculty of Information Technology, Monash University, Australia
Adversarial Machine LearningGenerative ModelsModel UnlearningModel EditingOptimal Transport