🤖 AI Summary
To address the degradation of model robustness under continually emerging adversarial attacks, this paper proposes continual robust training (CRT): fine-tuning a defended model on new attacks as they arise. The paper proves that the gap in a model's robustness across attacks is bounded by how far each attack perturbs a sample in the model's logit space, and uses this result to design a lightweight logit-distance regularizer that maintains robustness against previous attacks while adapting to new ones, without significant computational overhead. The approach is compatible with ℓₚ-based adversarial training and multi-stage fine-tuning, requiring no architectural modifications. Evaluated on over 100 attack combinations across CIFAR-10, CIFAR-100, and ImageNette, CRT achieves substantial improvements in cross-attack robust accuracy while incurring negligible increases in training cost. The implementation is publicly available.
📝 Abstract
Robust training methods typically defend against specific attack types, such as Lp attacks with fixed budgets, and rarely account for the fact that defenders may encounter new attacks over time. A natural solution is to adapt the defended model to new adversaries as they arise via fine-tuning, a method which we call continual robust training (CRT). However, when implemented naively, fine-tuning on new attacks degrades robustness on previous attacks. This raises the question: how can we improve the initial training and fine-tuning of the model to simultaneously achieve robustness against previous and new attacks? We present theoretical results which show that the gap in a model's robustness against different attacks is bounded by how far each attack perturbs a sample in the model's logit space, suggesting that regularizing with respect to this logit space distance can help maintain robustness against previous attacks. Extensive experiments on 3 datasets (CIFAR-10, CIFAR-100, and ImageNette) and over 100 attack combinations demonstrate that the proposed regularization improves robust accuracy with little overhead in training time. Our findings and open-source code lay the groundwork for the deployment of models robust to evolving attacks.
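The regularization idea described above can be illustrated with a small sketch. The exact loss is defined in the paper; here is a hypothetical, NumPy-only version that combines the adversarial training loss on a new attack with a penalty on the logit-space distance between the new attack's perturbation and a previous attack's perturbation (the names `crt_loss` and `lam` are illustrative, not from the paper):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Numerically stable cross-entropy for a single example.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def crt_loss(logits_new_attack, logits_old_attack, label, lam=1.0):
    """Sketch of a CRT-style fine-tuning objective:
    adversarial loss on the new attack, plus a regularizer that
    penalizes how far apart the old and new attacks push the
    sample in logit space (per the paper's bound, keeping this
    distance small helps retain robustness to the old attack)."""
    adv_loss = softmax_cross_entropy(logits_new_attack, label)
    logit_distance = np.linalg.norm(logits_new_attack - logits_old_attack)
    return adv_loss + lam * logit_distance

# Toy usage: if both attacks produce identical logits, the
# regularizer vanishes and the loss reduces to plain cross-entropy.
logits = np.array([2.0, 0.0, -1.0])
base = softmax_cross_entropy(logits, 0)
regularized = crt_loss(logits, logits, 0, lam=5.0)
```

In practice the logits would come from two forward passes of the model on the two adversarially perturbed inputs, and the gradient of this combined loss would drive fine-tuning.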