Bridging Symmetry and Robustness: On the Role of Equivariance in Enhancing Adversarial Robustness

📅 2025-10-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost of adversarial training and its detrimental impact on clean-data accuracy. We propose a symmetry-aware architecture leveraging group-equivariant convolutions (rotation- and scale-equivariant), which enhances robustness without requiring adversarial training. We provide the first systematic theoretical analysis demonstrating that equivariance reduces hypothesis space complexity, regularizes gradient norms, and yields tighter certified robustness bounds within the CLEVER framework. Based on these insights, we design two novel hybrid equivariant-CNN architectures—parallel and cascaded—exploiting symmetry priors to optimize decision boundaries. Evaluated on CIFAR-10, CIFAR-100, and CIFAR-10C, our models achieve significant improvements in robustness against FGSM and PGD attacks while maintaining or even improving clean-data accuracy, thus simultaneously enhancing both adversarial robustness and generalization performance.

📝 Abstract
Adversarial examples reveal critical vulnerabilities in deep neural networks by exploiting their sensitivity to imperceptible input perturbations. While adversarial training remains the predominant defense strategy, it often incurs significant computational cost and may compromise clean-data accuracy. In this work, we investigate an architectural approach to adversarial robustness by embedding group-equivariant convolutions (specifically, rotation- and scale-equivariant layers) into standard convolutional neural networks (CNNs). These layers encode symmetry priors that align model behavior with structured transformations in the input space, promoting smoother decision boundaries and greater resilience to adversarial attacks. We propose and evaluate two symmetry-aware architectures: a parallel design that processes standard and equivariant features independently before fusion, and a cascaded design that applies equivariant operations sequentially. Theoretically, we demonstrate that such models reduce hypothesis space complexity, regularize gradients, and yield tighter certified robustness bounds under the CLEVER (Cross Lipschitz Extreme Value for nEtwork Robustness) framework. Empirically, our models consistently improve adversarial robustness and generalization across CIFAR-10, CIFAR-100, and CIFAR-10C under both FGSM and PGD attacks, without requiring adversarial training. These findings underscore the potential of symmetry-enforcing architectures as efficient and principled alternatives to data augmentation-based defenses.
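To make the symmetry prior concrete, here is a minimal numpy sketch of a C4 (90-degree rotation) group convolution followed by pooling over orientations and space. This is an illustration of the general equivariance idea, not the paper's actual layers: the `conv2d` helper, kernel size, and pooling choice are assumptions for the example.

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def c4_invariant_response(x, k):
    """Correlate x with all four 90-degree rotations of k (a C4 group
    convolution), then max-pool over space and over the orientation channel.
    Rotating the input by 90 degrees only permutes/rotates the per-orientation
    response maps, so this pooled value is rotation-invariant."""
    responses = [conv2d(x, np.rot90(k, r)) for r in range(4)]
    return max(r.max() for r in responses)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))

a = c4_invariant_response(x, k)
b = c4_invariant_response(np.rot90(x), k)
print(np.isclose(a, b))  # True: a 90-degree input rotation leaves the response unchanged
```

The invariance comes purely from weight sharing across rotated copies of one filter, which is the sense in which such layers constrain (and thereby shrink) the hypothesis space.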
Problem

Research questions and friction points this paper is trying to address.

Enhancing adversarial robustness through group-equivariant convolutional layers
Reducing hypothesis space complexity with symmetry-aware architectures
Improving model resilience without adversarial training requirements
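For reference, the FGSM threat model the paper defends against perturbs the input one step along the sign of the input gradient of the loss. The sketch below uses a toy logistic-regression "network" with an analytic gradient so it stays self-contained; the weights and input are made up for illustration and are not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: x_adv = x + eps * sign(grad_x L).
    For a sigmoid output p with binary cross-entropy loss,
    the input gradient is (p - y) * w."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w = rng.standard_normal(4)     # illustrative model weights
b = 0.1
x = rng.standard_normal(4)     # illustrative clean input
y = 1.0                        # true label

x_adv = fgsm(x, y, w, b, eps=0.1)
# The attack raises the loss: the predicted probability of the true class drops.
print(sigmoid(w @ x_adv + b) < sigmoid(w @ x + b))  # True
```

PGD is essentially this step iterated with a projection back into the eps-ball, which is why robustness to PGD is the stricter of the two benchmarks reported.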
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embedding group-equivariant convolutions into standard CNNs
Proposing parallel and cascaded symmetry-aware architectures
Enhancing robustness without adversarial training via symmetry
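The parallel and cascaded hybrids can be sketched schematically as follows. The feature extractors here are toy stand-ins (an elementwise nonlinearity and a C4 symmetrization), chosen only to show how the two designs compose branches; they are not the paper's actual blocks.

```python
import numpy as np

def standard_feat(x):
    """Stand-in for an ordinary conv block (here just a nonlinearity)."""
    return np.tanh(x)

def equi_feat(x):
    """Stand-in for a rotation-equivariant block: average the input over
    all four 90-degree rotations, so the output is C4-invariant."""
    return np.mean([np.rot90(x, r) for r in range(4)], axis=0)

def parallel_model(x):
    """Parallel design: standard and equivariant branches run independently
    on the input, then their features are fused by concatenation."""
    return np.concatenate([standard_feat(x).ravel(), equi_feat(x).ravel()])

def cascaded_model(x):
    """Cascaded design: the equivariant block feeds the standard block."""
    return standard_feat(equi_feat(x)).ravel()

x = np.arange(16.0).reshape(4, 4)
print(parallel_model(x).shape)  # (32,)  -- two fused 16-dim branches
print(cascaded_model(x).shape)  # (16,)  -- one sequential pipeline
```

The design trade-off mirrors the paper's framing: the parallel variant preserves unconstrained standard features alongside the symmetry prior, while the cascaded variant forces every downstream feature through it.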
Longwei Wang
AI Research Lab, Department of Computer Science, University of South Dakota, USA
Ifrat Ikhtear Uddin
AI Research Lab, Department of Computer Science, University of South Dakota, USA
KC Santosh
AI Research Lab, Department of Computer Science, University of South Dakota, USA
Chaowei Zhang
Department of Computer Science, Yangzhou University, China
Natural Language Processing · Data Mining · Parallel Computing
Xiao Qin
Department of Computer Science and Software Engineering, Auburn University, USA
Yang Zhou
Department of Computer Science and Software Engineering, Auburn University, USA