🤖 AI Summary
Single-step adversarial training is prone to a failure mode known as catastrophic overfitting, which limits its use for large perturbations; separately, a wide gap divides the best-performing empirical defenses from certified ones. Method: Motivated by connections between local linearity and catastrophic overfitting, the paper studies whether certified training techniques, which combine adversarial attacks with network over-approximations (as in IBP-style bounds), can serve empirical robustness. It shows that a recent certified training algorithm, when tuned for this purpose, prevents catastrophic overfitting under single-step attacks, and it introduces a novel regularizer based on network over-approximations that achieves similar effects at markedly lower runtime. Results: The tuned algorithm avoids catastrophic overfitting on single-step attacks and, under appropriate experimental settings, bridges the gap to multi-step baselines, while the proposed regularizer delivers comparable robustness with substantially reduced training time. Overall, the work provides experimental evidence on both the practical utility and the limitations of using certified training towards empirical robustness.
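For readers unfamiliar with the setting, the following is a minimal sketch of the kind of single-step (FGSM-style) attack whose training dynamics are at issue here. The linear logistic model, its weights, and the step size `eps` are illustrative assumptions, not taken from the paper; for a linear model the input gradient has a closed form, so no autograd is needed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, x, y):
    """Binary cross-entropy of a linear classifier on one example."""
    p = sigmoid(w @ x)
    return -np.log(p) if y == 1 else -np.log(1.0 - p)

def fgsm(w, x, y, eps):
    """Single-step attack: one signed-gradient step of size eps.

    For a logistic model, the gradient of the loss w.r.t. the input
    is (sigmoid(w @ x) - y) * w.
    """
    grad_x = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad_x)

# Illustrative weights and example (hypothetical, not from the paper).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.5, 0.2, -0.1])
y = 1
x_adv = fgsm(w, x, y, eps=0.1)

# One signed step suffices to increase the loss on this example.
assert bce_loss(w, x_adv, y) > bce_loss(w, x, y)
```

Training against such one-step perturbations is cheap, but on deep networks it can collapse (catastrophic overfitting): the model becomes robust to the single-step attack while losing robustness to multi-step ones.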
📝 Abstract
Adversarial training is arguably the most popular way to provide empirical robustness against specific adversarial examples. While variants based on multi-step attacks incur significant computational overhead, single-step variants are vulnerable to a failure mode known as catastrophic overfitting, which hinders their practical utility for large perturbations. A parallel line of work, certified training, has focused on producing networks amenable to formal guarantees of robustness against any possible attack. However, the wide gap between the best-performing empirical and certified defenses has severely limited the applicability of the latter. Inspired by recent developments in certified training, which rely on a combination of adversarial attacks with network over-approximations, and by the connections between local linearity and catastrophic overfitting, we present experimental evidence on the practical utility and limitations of using certified training towards empirical robustness. We show that, when tuned for the purpose, a recent certified training algorithm can prevent catastrophic overfitting on single-step attacks, and that it can bridge the gap to multi-step baselines under appropriate experimental settings. Finally, we present a novel regularizer for network over-approximations that can achieve similar effects while markedly reducing runtime.
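The network over-approximations referenced above are typified by interval bound propagation (IBP), which pushes an input box through the network layer by layer to obtain sound output bounds. A minimal numpy sketch follows; the two-layer ReLU network, its random weights, and the radius `eps` are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

def ibp_linear(l, u, W, b):
    """Propagate interval bounds [l, u] through an affine layer Wx + b.

    Splitting W into positive and negative parts gives sound
    element-wise output bounds (the standard IBP step).
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def ibp_relu(l, u):
    # ReLU is monotone, so it maps bounds to bounds directly.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Hypothetical 2-layer ReLU network; weights are illustrative only.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)

def forward(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

x = rng.standard_normal(3)
eps = 0.1
l, u = x - eps, x + eps            # input box of radius eps around x
l, u = ibp_relu(*ibp_linear(l, u, W1, b1))
l, u = ibp_linear(l, u, W2, b2)

# Soundness: any perturbed input within eps lands inside [l, u].
y_pert = forward(x + eps * np.sign(rng.standard_normal(3)))
assert np.all(l <= y_pert + 1e-9) and np.all(y_pert <= u + 1e-9)
```

Certified training optimizes a loss computed on such bounds rather than on single attacked points, which is what makes formal robustness guarantees possible; the paper's contribution is to repurpose and regularize this machinery for empirical robustness at low cost.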