Towards Model Resistant to Transferable Adversarial Examples via Trigger Activation

📅 2025-04-20
🏛️ IEEE Transactions on Information Forensics and Security
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Transferable adversarial examples pose a severe threat in black-box settings, as they can easily fool diverse deep neural networks. Method: This paper introduces a trigger-activation mechanism, wherein a model outputs reliable predictions only when a fixed trigger τ is embedded into the input, naturally suppressing transfer-based attacks. Contribution/Results: The work introduces the "trigger activation" paradigm, reveals that such models are intrinsically robust to transferable adversarial examples, and supports this with a first-order gradient-based theoretical analysis. By jointly optimizing a learnable trigger and the model parameters, integrating gradient analysis with adversarial training, the method requires no auxiliary detection modules or input preprocessing. Experiments across multiple datasets and diverse transfer attacks demonstrate substantial improvements in robustness while preserving accuracy on benign inputs once the trigger is applied.

📝 Abstract
Adversarial examples, characterized by imperceptible perturbations, pose significant threats to deep neural networks by misleading their predictions. A critical aspect of these examples is their transferability, allowing them to deceive unseen models in black-box scenarios. Despite the widespread exploration of defense methods, including those on transferability, they show limitations: inefficient deployment, ineffective defense, and degraded performance on clean images. In this work, we introduce a novel training paradigm aimed at enhancing robustness against transferable adversarial examples (TAEs) in a more efficient and effective way. We propose a model that exhibits random guessing behavior when presented with clean data $\boldsymbol{x}$ as input, and generates accurate predictions when presented with triggered data $\boldsymbol{x}+\boldsymbol{\tau}$. Importantly, the trigger $\boldsymbol{\tau}$ remains constant for all data instances. We refer to these models as models with trigger activation. We are surprised to find that these models exhibit certain robustness against TAEs. Through the consideration of first-order gradients, we provide a theoretical analysis of this robustness. Moreover, through the joint optimization of the learnable trigger and the model, we achieve improved robustness to transferable attacks. Extensive experiments conducted across diverse datasets, evaluating a variety of attacking methods, underscore the effectiveness and superiority of our approach.
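The abstract does not give the training objective explicitly; a minimal sketch of one plausible form is below, assuming cross-entropy on triggered inputs plus a KL-to-uniform term that enforces random guessing on clean inputs. The function name and the weight `lam` are illustrative, not from the paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def trigger_activation_loss(logits_clean, logits_triggered, labels, lam=1.0):
    # Cross-entropy on triggered inputs x + tau: the model must predict correctly.
    p_trig = softmax(logits_triggered)
    n, k = p_trig.shape
    ce = -np.log(p_trig[np.arange(n), labels] + 1e-12).mean()
    # KL(p_clean || uniform) on clean inputs x: pushes the model toward
    # random guessing on clean data, as the abstract describes.
    # KL to uniform simplifies to log k minus the entropy of p_clean.
    p_clean = softmax(logits_clean)
    kl = (np.log(k) + (p_clean * np.log(p_clean + 1e-12)).sum(axis=1)).mean()
    return ce + lam * kl
```

Minimizing such a loss jointly over the model parameters and the learnable trigger τ would match the joint optimization the abstract mentions; the paper's exact formulation may differ.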
Problem

Research questions and friction points this paper is trying to address.

Enhancing robustness against transferable adversarial examples
Addressing inefficiency and ineffectiveness in existing defenses
Maintaining performance on clean images during defense
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trigger activation for adversarial robustness
Joint optimization of trigger and model
Random guessing on clean data
🔎 Similar Papers
2024-07-30 · European Conference on Computer Vision · Citations: 1
Yi Yu
Rapid-Rich Object Search (ROSE) Lab, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore
Song Xia
NTU
Machine Learning
Xun Lin
Postdoc, CUHK; PhD, Beihang University
Subtle Visual Computing · Media Security
Chenqi Kong
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
Wenhan Yang
PhD student of Computer Science, University of California, Los Angeles
Self-supervised Learning · Model Robustness
Shijian Lu
College of Computing and Data Science, NTU
Image and Video Analytics · Computer Vision · Machine Learning
Yap-Peng Tan
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
A. Kot
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore