Towards Robust Universal Perturbation Attacks: A Float-Coded, Penalty-Driven Evolutionary Approach

📅 2026-01-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of generating universal adversarial perturbations (UAPs) that are imperceptible, highly effective in causing misclassification, and generalize across diverse deep learning models. To this end, the authors propose a gradient-free, single-objective evolutionary framework that employs floating-point encoding aligned with the scale of deep neural networks. The framework integrates dynamic evolutionary operators, an adaptive scheduling strategy, and a batch-switching mechanism to jointly enhance perturbation invisibility, transferability, and attack efficiency. Extensive experiments on ImageNet demonstrate that the generated UAPs achieve significantly lower ℓp-norm magnitudes, higher misclassification rates, and faster convergence compared to existing evolutionary-based approaches, thereby establishing a new state of the art in gradient-free universal adversarial attack generation.

📝 Abstract
Universal adversarial perturbations (UAPs) have garnered significant attention due to their ability to undermine deep neural networks across multiple inputs using a single noise pattern. Evolutionary algorithms offer a promising approach to generating such perturbations due to their ability to navigate non-convex, gradient-free landscapes. In this work, we introduce a float-coded, penalty-driven single-objective evolutionary framework for UAP generation that achieves lower visibility perturbations while enhancing attack success rates. Our approach leverages continuous gene representations aligned with contemporary deep learning scales, incorporates dynamic evolutionary operators with adaptive scheduling, and utilizes a modular PyTorch implementation for seamless integration with modern architectures. Additionally, we ensure the universality of the generated perturbations by testing across diverse models and by periodically switching batches to prevent overfitting. Experimental results on the ImageNet dataset demonstrate that our framework consistently produces perturbations with smaller norms, higher misclassification effectiveness, and faster convergence compared to existing evolutionary-based methods. These findings highlight the robustness and scalability of our approach for universal adversarial attacks across various deep learning architectures.
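The pipeline described in the abstract — a float-coded population, a penalty that trades attack success against perturbation norm, adaptively scheduled mutation, and periodic batch switching — can be sketched as follows. This is a minimal illustration in NumPy, not the paper's implementation; all function names, hyperparameters, and the specific operators (truncation selection, Gaussian mutation with a linearly decaying step size) are assumptions made for the example.

```python
import numpy as np

def evolve_uap(model, batches, pop_size=20, generations=100,
               switch_every=10, penalty=0.05, sigma0=0.1, seed=0):
    """Gradient-free search for a universal perturbation delta.

    model(x) -> predicted labels for a batch of images in [0, 1];
    batches  -> list of (images, labels) arrays.
    Illustrative sketch only; not the authors' exact operators or schedule.
    """
    rng = np.random.default_rng(seed)
    shape = batches[0][0].shape[1:]  # per-image shape, e.g. (3, 224, 224)
    # Float-coded genes: each individual is a real-valued perturbation tensor.
    pop = rng.normal(0.0, sigma0, size=(pop_size, *shape)).astype(np.float32)

    def fitness(delta, images, labels):
        preds = model(np.clip(images + delta, 0.0, 1.0))
        fooled = np.mean(preds != labels)               # misclassification rate
        return fooled - penalty * np.linalg.norm(delta)  # penalize visibility

    batch_idx = 0
    for g in range(generations):
        if g % switch_every == 0:  # batch switching to discourage overfitting
            batch_idx = (batch_idx + 1) % len(batches)
        images, labels = batches[batch_idx]
        scores = np.array([fitness(d, images, labels) for d in pop])
        elites = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Adaptive scheduling: mutation strength decays over generations.
        sigma = sigma0 * (1.0 - g / generations)
        children = elites + rng.normal(0.0, sigma, size=elites.shape).astype(np.float32)
        pop = np.concatenate([elites, children])

    return max(pop, key=lambda d: fitness(d, *batches[batch_idx]))
```

A real run would wrap a PyTorch classifier (and, for universality, several of them) in `model`; the scalar fitness is what makes the search single-objective despite the competing goals of invisibility and attack success.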
Problem

Research questions and friction points this paper is trying to address.

Universal Adversarial Perturbations
Robustness
Attack Success Rate
Visibility
Deep Neural Networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Universal Adversarial Perturbations
Evolutionary Algorithm
Float-Coded Representation
Penalty-Driven Optimization
Adaptive Evolutionary Operators
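The penalty-driven optimization listed above folds the two competing goals into one scalar fitness; in our notation (the symbols below are illustrative, not taken from the paper), a perturbation $\delta$ is scored as

```latex
F(\delta) \;=\; \mathrm{ASR}(\delta) \;-\; \lambda \,\lVert \delta \rVert_p
```

where $\mathrm{ASR}(\delta)$ is the misclassification (attack success) rate over the current batch, $\lVert \delta \rVert_p$ measures visibility, and $\lambda > 0$ sets the trade-off. Maximizing $F$ keeps the search single-objective while still rewarding small-norm, high-fooling perturbations.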