🤖 AI Summary
Transferable adversarial examples pose a severe threat in black-box settings, as they can fool diverse deep neural networks without access to their parameters. Method: This paper introduces a trigger-activation mechanism, in which a model outputs reliable predictions only when a fixed trigger τ is embedded into the input, naturally suppressing transfer-based attacks. Contribution/Results: The paper pioneers the "trigger-activation" paradigm, is the first to reveal the intrinsic robustness of such models against transferable adversarial examples under a fixed trigger, and supports this finding with a first-order gradient-based theoretical explanation. By jointly optimizing a learnable trigger and the model parameters, integrating gradient analysis with adversarial training, the method requires no auxiliary detection modules or input preprocessing. Experiments across multiple datasets and diverse transfer attacks demonstrate substantial improvements in robustness while preserving 100% of clean-sample accuracy.
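One illustrative first-order reading of this robustness (a sketch consistent with the summary above, not the paper's exact derivation): a perturbation $\boldsymbol{\delta}$ crafted on a surrogate model $f$ at a clean input $\boldsymbol{x}$ affects the defended model $g$, which operates at the triggered point $\boldsymbol{x}+\boldsymbol{\tau}$, mainly through the first-order term

$$
g(\boldsymbol{x}+\boldsymbol{\tau}+\boldsymbol{\delta}) \approx g(\boldsymbol{x}+\boldsymbol{\tau}) + \boldsymbol{\delta}^{\top}\nabla_{\boldsymbol{x}}\, g(\boldsymbol{x}+\boldsymbol{\tau}), \qquad \boldsymbol{\delta} \propto \nabla_{\boldsymbol{x}}\, \mathcal{L}\big(f(\boldsymbol{x}), y\big),
$$

so a transfer attack succeeds only to the extent that the surrogate's gradients at $\boldsymbol{x}$ align with the defended model's gradients at $\boldsymbol{x}+\boldsymbol{\tau}$; a fixed trigger shifts the operating point and weakens this alignment.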
📝 Abstract
Adversarial examples, characterized by imperceptible perturbations, pose significant threats to deep neural networks by misleading their predictions. A critical aspect of these examples is their transferability, which allows them to deceive unseen models in black-box scenarios. Despite the widespread exploration of defense methods, including those targeting transferability, existing approaches show limitations: inefficient deployment, ineffective defense, and degraded performance on clean images. In this work, we introduce a novel training paradigm aimed at enhancing robustness against transferable adversarial examples (TAEs) in a more efficient and effective way. We propose a model that exhibits random-guessing behavior when presented with clean data $\boldsymbol{x}$ as input, and generates accurate predictions when presented with triggered data $\boldsymbol{x}+\boldsymbol{\tau}$. Importantly, the trigger $\boldsymbol{\tau}$ remains constant for all data instances. We refer to these models as **models with trigger activation**. Surprisingly, we find that these models exhibit a certain degree of robustness against TAEs. Through the consideration of first-order gradients, we provide a theoretical analysis of this robustness. Moreover, through the joint optimization of the learnable trigger and the model, we achieve improved robustness to transferable attacks. Extensive experiments conducted across diverse datasets, evaluating a variety of attack methods, underscore the effectiveness and superiority of our approach.
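The joint optimization described above can be sketched roughly as follows. This is a minimal, hypothetical PyTorch example, not the authors' implementation: the tiny model, synthetic data, loss weighting, and hyperparameters are all assumptions, and the adversarial-training component mentioned in the paper is omitted for brevity. The clean-input branch is pushed toward a uniform output distribution to mimic random guessing, while the single shared trigger $\boldsymbol{\tau}$ is optimized jointly with the model so that triggered inputs are classified correctly.

```python
# Hypothetical sketch of trigger-activation training (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

n_classes, dim = 3, 16
model = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, n_classes))
tau = nn.Parameter(torch.zeros(dim))  # one fixed trigger shared by all inputs
opt = torch.optim.Adam(list(model.parameters()) + [tau], lr=1e-2)

# Tiny synthetic dataset: label = argmax of the first n_classes features.
x = torch.randn(256, dim)
y = x[:, :n_classes].argmax(dim=1)

uniform = torch.full((1, n_classes), 1.0 / n_classes)
for step in range(300):
    opt.zero_grad()
    # Triggered branch: x + tau must be classified correctly.
    loss_trig = F.cross_entropy(model(x + tau), y)
    # Clean branch: push the output distribution toward uniform (random guessing).
    log_probs_clean = F.log_softmax(model(x), dim=1)
    loss_clean = F.kl_div(log_probs_clean,
                          uniform.expand_as(log_probs_clean),
                          reduction="batchmean")
    (loss_trig + loss_clean).backward()
    opt.step()

with torch.no_grad():
    acc_trig = (model(x + tau).argmax(1) == y).float().mean().item()
    acc_clean = (model(x).argmax(1) == y).float().mean().item()
print(f"triggered acc={acc_trig:.2f}, clean acc={acc_clean:.2f}")
```

In this toy setting the model learns to be accurate only at the shifted operating point $\boldsymbol{x}+\boldsymbol{\tau}$, while outputs on un-triggered inputs stay near chance, which is the behavior the abstract describes.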