Learning from Peers: Collaborative Ensemble Adversarial Training

📅 2025-08-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing ensemble adversarial training (EAT) methods train constituent models independently, neglecting their collaborative potential and thereby limiting robustness gains. To address this, we propose Collaborative Ensemble Adversarial Training (CEAT), the first EAT framework that dynamically weights adversarial examples based on inter-model prediction discrepancies, thereby steering training toward high-divergence, hard samples. Furthermore, CEAT introduces a calibrating distance regularization to explicitly balance output-distribution consistency and diversity across ensemble members. Crucially, CEAT requires no architectural modifications to base models, making it model-agnostic and broadly applicable. Extensive experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate that CEAT consistently outperforms state-of-the-art EAT methods, achieving average robust accuracy improvements of 2.3–4.1 percentage points under PGD-20 and AutoAttack. These results establish CEAT as the new leading approach to ensemble adversarial training.
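To make the weighting idea concrete, here is a minimal PyTorch sketch of disparity-based sample weighting. The symmetric-KL discrepancy, the `temperature` parameter, the normalization scheme, and the helper name `disparity_weights` are all illustrative assumptions; the paper's exact weighting function may differ.

```python
import torch
import torch.nn.functional as F

def disparity_weights(logits_a: torch.Tensor, logits_b: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """Per-sample weights from the prediction gap between two sub-models.

    Samples on which the sub-models disagree most receive the largest
    weights, steering training toward examples near the ensemble's
    decision boundary. The symmetric KL below is an assumed divergence
    measure, not necessarily CEAT's actual choice.
    """
    p = F.softmax(logits_a, dim=1)
    q = F.softmax(logits_b, dim=1)
    log_p = F.log_softmax(logits_a, dim=1)
    log_q = F.log_softmax(logits_b, dim=1)
    # Symmetric KL divergence per sample, shape (batch,).
    sym_kl = 0.5 * ((p * (log_p - log_q)).sum(dim=1)
                    + (q * (log_q - log_p)).sum(dim=1))
    # Normalize so the weights sum to the batch size, keeping the
    # weighted loss on the same scale as an unweighted mean.
    w = F.softmax(sym_kl / temperature, dim=0) * sym_kl.numel()
    return w.detach()
```

Detaching the weights keeps gradients from flowing through the discrepancy measure itself, so it acts purely as a per-sample attention signal rather than an extra loss term.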

📝 Abstract
Ensemble Adversarial Training (EAT) attempts to enhance the robustness of models against adversarial attacks by leveraging multiple models. However, current EAT strategies tend to train the sub-models independently, ignoring the cooperative benefits between sub-models. Through detailed inspection of the EAT process, we find that samples with classification disparities between sub-models lie close to the decision boundary of the ensemble and exert greater influence on its robustness. To this end, we propose a novel yet efficient Collaborative Ensemble Adversarial Training (CEAT) method that highlights cooperative learning among sub-models in the ensemble. Specifically, samples with larger predictive disparities between the sub-models receive greater attention during the adversarial training of the other sub-models. CEAT leverages these probability disparities to adaptively assign weights to different samples by incorporating a calibrating distance regularization. Extensive experiments on widely adopted datasets show that our proposed method achieves state-of-the-art performance over competitive EAT methods. Notably, CEAT is model-agnostic and can be seamlessly adapted to various ensemble methods with flexible applicability.
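As a complement to the abstract, below is a hedged sketch of one training step that combines the disparity-weighted adversarial loss with a distance regularizer. The agreement-based calibration, the squared-distance form, and the coefficient `lam` are assumptions made for illustration, not the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def ceat_step(model_a, model_b, x_adv, y, optimizer, lam: float = 0.1):
    """One CEAT-style update for a two-model ensemble (illustrative only)."""
    logits_a, logits_b = model_a(x_adv), model_b(x_adv)
    p, q = F.softmax(logits_a, dim=1), F.softmax(logits_b, dim=1)

    # Per-sample disagreement via symmetric KL, as in the earlier sketch
    # (any divergence measure would do here).
    log_p = F.log_softmax(logits_a, dim=1)
    log_q = F.log_softmax(logits_b, dim=1)
    sym_kl = 0.5 * ((p * (log_p - log_q)).sum(1)
                    + (q * (log_q - log_p)).sum(1))
    w = (F.softmax(sym_kl, dim=0) * sym_kl.numel()).detach()

    # Disparity-weighted adversarial classification loss for both models.
    ce = (w * F.cross_entropy(logits_a, y, reduction="none")).mean() \
       + (w * F.cross_entropy(logits_b, y, reduction="none")).mean()

    # Assumed "calibrating" distance term: pull the output distributions
    # together only where the sub-models predict different labels, so
    # diversity is preserved on samples where they already agree.
    agree = (logits_a.argmax(1) == logits_b.argmax(1)).float()
    dist = ((p - q) ** 2).sum(1)
    reg = ((1.0 - agree) * dist).mean()

    loss = ce + lam * reg
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

Gating the distance term on label disagreement is one plausible way to balance consistency against diversity, as the abstract describes; the published regularizer may calibrate the distance differently.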
Problem

Research questions and friction points this paper is trying to address.

Enhancing model robustness against adversarial attacks
Addressing ignored cooperative benefits in ensemble training
Focusing on samples near decision boundaries for robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collaborative Ensemble Adversarial Training for robustness
Leverages predictive disparities to weight samples
Model-agnostic method with calibrating distance regularization
Dengjin Li
Laboratory of Big Data and Decision, National University of Defense Technology, Changsha, Hunan, China
Yanming Guo
National University of Defense Technology
deep learning, computer vision
Yuxiang Xie
Laboratory of Big Data and Decision, National University of Defense Technology, Changsha, Hunan, China
Zheng Li
Laboratory of Big Data and Decision, National University of Defense Technology, Changsha, Hunan, China
Jiangming Chen
Laboratory of Big Data and Decision, National University of Defense Technology, Changsha, Hunan, China
Xiaolong Li
Laboratory of Big Data and Decision, National University of Defense Technology, Changsha, Hunan, China
Mingrui Lao
Laboratory of Big Data and Decision, National University of Defense Technology, Changsha, Hunan, China