SAFER-AiD: Saccade-Assisted Foveal-peripheral vision Enhanced Reconstruction for Adversarial Defense

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Adversarial attacks pose severe threats to the practical deployment security of deep learning models. To address this, we propose a lightweight, biologically inspired defense framework grounded in principles of the human visual system. Our method introduces three key innovations: (1) modeling foveal-peripheral non-uniform sampling and saccadic dynamics via reinforcement learning–guided multi-view sampling path planning; (2) integrating predictive coding to enable distortion-free image reconstruction while suppressing adversarial perturbations; and (3) constructing an end-to-end preprocessing module that requires no fine-tuning of downstream classifiers. Extensive experiments on ImageNet demonstrate substantial robustness improvements across multiple mainstream architectures against both white-box (e.g., PGD) and black-box (e.g., AutoAttack) adversaries—reducing average classification error by 28.6% while cutting training overhead by 73%. Crucially, the approach preserves semantic fidelity and maintains clean-data classification accuracy.
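The first innovation, foveal-peripheral non-uniform sampling, can be illustrated with a minimal sketch: full resolution is kept inside a small foveal window around a fixation point, while the periphery is sampled at progressively coarser resolution with eccentricity. Everything here (function name, block-averaging scheme, ring radii) is illustrative, not the paper's actual implementation.

```python
import numpy as np

def foveal_glimpse(img, fix, fovea_r=16, levels=3):
    """Crude foveal-peripheral sample: full resolution inside the
    fovea, progressively coarser (block-averaged) rings outside.
    Names and parameters are illustrative, not from the paper."""
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    ecc = np.hypot(ys - fix[0], xs - fix[1])   # eccentricity from fixation
    out = img.astype(float).copy()
    for lvl in range(1, levels + 1):
        k = 2 ** lvl                           # block size grows with eccentricity
        ph, pw = -h % k, -w % k                # pad so blocks tile evenly
        pad = np.pad(img.astype(float), ((0, ph), (0, pw), (0, 0)), mode="edge")
        coarse = pad.reshape((h + ph) // k, k, (w + pw) // k, k, -1).mean(axis=(1, 3))
        coarse = np.repeat(np.repeat(coarse, k, axis=0), k, axis=1)[:h, :w]
        ring = ecc >= fovea_r * lvl            # outer rings take coarser samples
        out[ring] = coarse[ring]
    return out
```

The coarse peripheral averaging is one plausible way such sparse sampling suppresses high-frequency adversarial perturbations while the fovea preserves task-relevant detail.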

📝 Abstract
Adversarial attacks significantly challenge the safe deployment of deep learning models, particularly in real-world applications. Traditional defenses often rely on computationally intensive optimization (e.g., adversarial training or data augmentation) to improve robustness, whereas the human visual system achieves inherent robustness to adversarial perturbations through evolved biological mechanisms. We hypothesize that attention-guided non-homogeneous sparse sampling and predictive coding play a key role in this robustness. To test this hypothesis, we propose a novel defense framework incorporating three key biological mechanisms: foveal-peripheral processing, saccadic eye movements, and cortical filling-in. Our approach employs reinforcement learning-guided saccades to selectively capture multiple foveal-peripheral glimpses, which are integrated into a reconstructed image before classification. This biologically inspired preprocessing effectively mitigates adversarial noise, preserves semantic integrity, and notably requires no retraining or fine-tuning of downstream classifiers, enabling seamless integration with existing systems. Experiments on the ImageNet dataset demonstrate that our method improves system robustness across diverse classifiers and attack types, while significantly reducing training overhead compared to both biologically and non-biologically inspired defense techniques.
Problem

Research questions and friction points this paper is trying to address.

Defending deep learning models against adversarial attacks through biological vision mechanisms
Replacing computationally intensive defenses with foveal-peripheral processing and saccadic movements
Enhancing classifier robustness without retraining using selective sparse sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning-guided saccades for selective sampling
Foveal-peripheral glimpses integration for image reconstruction
Biologically inspired preprocessing without retraining classifiers
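The three innovations above compose into a retraining-free preprocessing pipeline: sample several glimpses, integrate them into a reconstruction, and hand the result to the unchanged downstream classifier. The sketch below assumes fixations are given and uses a plain per-pixel mean as the integration step; the actual method plans fixations with reinforcement learning and fills in detail via predictive coding, neither of which is implemented here.

```python
import numpy as np

def reconstruct_from_glimpses(img, fixations, sample_fn):
    """Integrate several foveal-peripheral glimpses into one image.
    A plain mean stands in for the paper's predictive-coding
    reconstruction; `sample_fn(img, fixation)` returns one glimpse."""
    glimpses = [sample_fn(img, f) for f in fixations]
    return np.mean(glimpses, axis=0)

def defend_then_classify(img, classifier, fixations, sample_fn):
    """Drop-in defense: reconstruct first, then call the frozen
    downstream classifier -- no retraining or fine-tuning needed."""
    return classifier(reconstruct_from_glimpses(img, fixations, sample_fn))
```

Because the defense lives entirely in this preprocessing stage, any existing classifier can be wrapped without touching its weights, which is the "seamless integration" property claimed in the abstract.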