Continual Adversarial Defense

📅 2023-12-15
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
To address the degradation of robustness and the catastrophic forgetting of historical knowledge in vision classifiers under dynamically evolving adversarial attacks, this paper proposes the Continual Adversarial Defense (CAD) framework. CAD is the first to unify continual learning, few-shot learning, and ensemble learning, enabling lightweight online model updates and an elastic architecture for rapid, few-sample adaptation to novel attacks. It adheres to four design principles: continual adaptability without catastrophic forgetting, low sample dependency, memory efficiency, and high accuracy on both clean and adversarial examples. Evaluated on a multi-stage benchmark of modern adversarial attacks, CAD significantly outperforms all baselines while requiring only a minimal computational budget and incurring a low cost of defense failure. Crucially, it preserves strong robustness against previously encountered attacks while achieving state-of-the-art generalized robustness against unseen ones.
📝 Abstract
Adversarial attacks against visual classifiers evolve rapidly, with new attacks emerging on a monthly basis, and numerous defenses have been proposed to generalize against as many known attacks as possible. However, designing a defense method that generalizes to all types of attacks is not realistic, because the environment in which defense systems operate is dynamic and comprises various unique attacks that emerge over time. A better match for this dynamic environment is a defense system that continuously collects adversarial data online to quickly improve itself. We therefore put forward a practical defense deployment against a challenging threat model and propose, for the first time, the Continual Adversarial Defense (CAD) framework, which adapts to attack sequences under four principles: (1) continual adaptation to new attacks without catastrophic forgetting, (2) few-shot adaptation, (3) memory-efficient adaptation, and (4) high accuracy on both clean and adversarial data. We explore and integrate cutting-edge continual learning, few-shot learning, and ensemble learning techniques to satisfy these principles. Extensive experiments validate the effectiveness of our approach against multiple stages of modern adversarial attacks and demonstrate significant improvements over numerous baseline methods. In particular, CAD adapts quickly with a minimal budget and a low cost of defense failure while maintaining good performance against previous attacks. Our research sheds light on a brand-new paradigm for continual defense adaptation against dynamic and evolving attacks.
Problem

Research questions and friction points this paper is trying to address.

Adapting a defense system to adversarial attacks that evolve over time.
Achieving continual adaptation without forgetting defenses against previous attacks.
Maintaining high accuracy on both clean and adversarial data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

A Continual Adversarial Defense (CAD) framework that adapts to sequences of attacks.
Integration of continual learning, few-shot learning, and ensemble learning techniques.
Memory-efficient, high-accuracy defense against evolving attacks.
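The adaptation scheme these points describe can be sketched as a loop: when a new attack stage arrives, a small per-attack head is fit on a few adversarial samples over a frozen shared backbone, old heads are retained to avoid forgetting, and predictions ensemble all heads. The sketch below is an illustrative reconstruction under those assumptions, not the authors' implementation; the `ContinualDefense` class, its nearest-centroid "heads", and the toy scalar backbone are all hypothetical stand-ins for the paper's actual architecture.

```python
# Illustrative continual few-shot defense loop (hypothetical sketch; the
# paper's real backbone, heads, and training procedure are not shown here).

class ContinualDefense:
    def __init__(self, backbone):
        self.backbone = backbone  # frozen feature extractor shared by all heads
        self.heads = []           # one lightweight head per attack stage

    def adapt(self, few_shot_batch):
        """Fit a small per-attack head on a few (input, label) samples.
        Previous heads are kept untouched, so defenses against earlier
        attacks are not forgotten (continual + few-shot adaptation)."""
        per_class = {}
        for x, y in few_shot_batch:
            per_class.setdefault(y, []).append(self.backbone(x))
        # Toy head: mean feature (centroid) per class.
        head = {y: sum(fs) / len(fs) for y, fs in per_class.items()}
        self.heads.append(head)

    def predict(self, x):
        """Ensemble inference: each head votes for the class whose
        centroid is nearest to the frozen feature of x."""
        f = self.backbone(x)
        votes = {}
        for head in self.heads:
            pred = min(head, key=lambda y: abs(head[y] - f))
            votes[pred] = votes.get(pred, 0) + 1
        return max(votes, key=votes.get)
```

With a trivial identity backbone, adapting on a second attack stage does not degrade predictions on samples from the first, which is the forgetting-free behavior the principles call for; a real system would use deep features and trainable heads instead of scalar centroids.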
👥 Authors
Qian Wang — Huazhong University of Science and Technology, China
Yaoyao Liu — Johns Hopkins University, USA
Hefei Ling — Huazhong University of Science and Technology, China
Yingwei Li — Research Scientist, Waymo
Qihao Liu — Johns Hopkins University, USA
Ping Li — Huazhong University of Science and Technology, China
Jiazhong Chen — Huazhong University of Science and Technology, China
Alan L. Yuille — Johns Hopkins University, USA
Ning Yu — Netflix Eyeline, USA