On Using Certified Training towards Empirical Robustness

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Single-step adversarial training is prone to a failure mode known as catastrophic overfitting, while certified training, despite providing formal guarantees, suffers from a wide performance gap to the best empirical defenses. Method: drawing on recent certified training algorithms that combine adversarial attacks with network over-approximations, and on the connection between local linearity and catastrophic overfitting, the paper presents experimental evidence on using certified training towards empirical robustness, and introduces a novel regularizer for network over-approximations that achieves similar effects at markedly reduced runtime. Results: when tuned for the purpose, a recent certified training algorithm prevents catastrophic overfitting under single-step attacks and, in appropriate experimental settings, bridges the gap to multi-step baselines.

📝 Abstract
Adversarial training is arguably the most popular way to provide empirical robustness against specific adversarial examples. While variants based on multi-step attacks incur significant computational overhead, single-step variants are vulnerable to a failure mode known as catastrophic overfitting, which hinders their practical utility for large perturbations. A parallel line of work, certified training, has focused on producing networks amenable to formal guarantees of robustness against any possible attack. However, the wide gap between the best-performing empirical and certified defenses has severely limited the applicability of the latter. Inspired by recent developments in certified training, which rely on a combination of adversarial attacks with network over-approximations, and by the connections between local linearity and catastrophic overfitting, we present experimental evidence on the practical utility and limitations of using certified training towards empirical robustness. We show that, when tuned for the purpose, a recent certified training algorithm can prevent catastrophic overfitting on single-step attacks, and that it can bridge the gap to multi-step baselines under appropriate experimental settings. Finally, we present a novel regularizer for network over-approximations that can achieve similar effects while markedly reducing runtime.
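The network over-approximations mentioned in the abstract are typically computed with interval bound propagation (IBP), which pushes an L-infinity input region through the network layer by layer. A minimal sketch of IBP through one affine layer followed by a ReLU (the layer weights and inputs below are illustrative, not from the paper):

```python
import numpy as np

def ibp_affine(lb, ub, W, b):
    """Propagate interval bounds [lb, ub] through an affine layer x -> W @ x + b."""
    center = (ub + lb) / 2.0
    radius = (ub - lb) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case spread of the interval
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lb, ub):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# Input region: every x with ||x - x0||_inf <= eps
x0 = np.array([0.5, -0.2])
eps = 0.1
lb, ub = x0 - eps, x0 + eps

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, 0.1])
lb, ub = ibp_affine(lb, ub, W, b)
lb, ub = ibp_relu(lb, ub)
print(lb, ub)  # sound bounds on the layer output over the whole input region
```

Certified training losses penalize the network through these bounds; the regularizer proposed in this paper aims at similar effects without propagating bounds through the full network, which is where the runtime savings come from.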
Problem

Research questions and friction points this paper is trying to address.

Bridging the gap between empirical and certified robustness defenses
Preventing catastrophic overfitting in single-step adversarial training
Reducing runtime of network over-approximation regularizers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines adversarial attacks with network over-approximations
Prevents catastrophic overfitting in single-step attacks
Introduces simple regularizer for faster runtime
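The single-step attacks referred to above are typically FGSM-style: a single signed-gradient step of size eps, which is cheap but triggers catastrophic overfitting when used alone. A toy sketch on a linear classifier with logistic loss (the model and values are illustrative, not the paper's setup):

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps):
    """Single-step FGSM: move eps along the sign of the input gradient."""
    return x + eps * np.sign(grad_x)

def input_grad(x, w, y):
    """Gradient w.r.t. x of the logistic loss log(1 + exp(-y * w.x)), y in {-1, +1}."""
    margin = y * (w @ x)
    return -y * w / (1.0 + np.exp(margin))

w = np.array([1.0, -1.0])   # fixed linear classifier
x = np.array([0.2, 0.4])    # clean input
y = 1.0                     # true label

g = input_grad(x, w, y)
x_adv = fgsm_perturb(x, g, eps=0.1)
# The adversarial point has a smaller (worse) margin than the clean one:
print(y * (w @ x), y * (w @ x_adv))
```

Adversarial training would then minimize the loss at `x_adv` instead of `x`; the paper studies how certified-training machinery keeps this single-step scheme from collapsing at large eps.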
A. Palma
Inria, École Normale Supérieure, PSL University, CNRS
Serge Durand
Université Paris-Saclay, CEA, List
Zakaria Chihani
CEA
Theorem proving · verification and validation · foundational logic · proof certification
François Terrier
Université Paris-Saclay, CEA, List
Caterina Urban
Inria, École Normale Supérieure, PSL University, CNRS