🤖 AI Summary
This work addresses the problem of defending against test-time adversarial attacks without retraining the model or modifying its architecture. The proposed method leverages latent-variable ensembling grounded in stochastic resonance theory: it generates multi-view features via small random translations of the input image, then aligns and aggregates those features to enable closed-form robust inference, establishing an architecture-agnostic and attack-agnostic "noise-to-counter-noise" defense paradigm. Notably, this is the first work to extend test-time defense to dense prediction tasks, including stereo matching and optical flow estimation. Empirical evaluation demonstrates substantial robustness gains: under adversarial perturbations, the method recovers 68.1% of the performance degradation on image classification, 71.9% on stereo matching, and 29.2% on optical flow. These results underscore its effectiveness across diverse vision tasks while preserving model integrity and inference efficiency.
📝 Abstract
We propose a test-time defense mechanism against adversarial attacks: imperceptible image perturbations that significantly alter a model's predictions. Unlike existing methods that rely on feature filtering or smoothing, which can lead to information loss, we propose to "combat noise with noise" by leveraging stochastic resonance to enhance robustness while minimizing information loss. Our approach applies small translational perturbations to the input image, aligns the resulting feature embeddings, and aggregates them before mapping back to the original reference frame. The procedure admits a closed-form expression and can be applied to diverse existing network architectures without introducing additional network modules or fine-tuning for specific attack types. The resulting method is entirely training-free, architecture-agnostic, and attack-agnostic. Empirical results show state-of-the-art robustness on image classification and, for the first time, establish a generic test-time defense for dense prediction tasks, including stereo matching and optical flow, highlighting the method's versatility and practicality. Specifically, relative to clean (unperturbed) performance, our method recovers up to 68.1% of the accuracy loss on image classification, 71.9% on stereo matching, and 29.2% on optical flow under various types of adversarial attacks.
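The translate-align-aggregate loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes circular (wrap-around) shifts via `np.roll`, a generic `features_fn` standing in for any frozen backbone, and spatially structured features that can be aligned by the inverse translation; the function names are illustrative.

```python
import numpy as np

def sr_ensemble(features_fn, img, shifts):
    """Stochastic-resonance-style test-time ensemble (sketch).

    For each small translation (dx, dy): shift the input, run the frozen
    feature extractor, undo the shift in feature space to re-align with
    the reference frame, then average the aligned features.
    """
    aligned = []
    for dx, dy in shifts:
        shifted = np.roll(img, shift=(dy, dx), axis=(0, 1))   # perturb input
        feats = features_fn(shifted)                          # frozen backbone
        aligned.append(np.roll(feats, shift=(-dy, -dx), axis=(0, 1)))  # re-align
    return np.mean(aligned, axis=0)                           # aggregate

# Toy usage: with an identity "backbone" and circular shifts, alignment is
# exact, so the aggregate reproduces the clean input.
img = np.arange(16.0).reshape(4, 4)
out = sr_ensemble(lambda x: x, img, shifts=[(0, 0), (1, 0), (0, 1), (-1, -1)])
```

In practice the extractor is a trained network and the shift is undone at the (possibly downsampled) feature resolution before the task head, which is what makes the defense training-free and architecture-agnostic.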