Adversarially Robust Spiking Neural Networks with Sparse Connectivity

📅 2025-05-16
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
To address the challenge of simultaneously achieving adversarial robustness, memory efficiency, and energy efficiency for deep neural networks on resource-constrained embedded systems, this paper proposes a novel sparse spiking neural network (SNN) construction method. Our approach uniquely enables *co-transfer* of both adversarial robustness and structural sparsity from robust artificial neural networks (ANNs): we first obtain a robust and sparse ANN via adversarial training, then design a new conversion algorithm to map its sparse connectivity and weights to an SNN, incorporating sparse connection modeling and lightweight adversarial fine-tuning. Experiments demonstrate that, compared to dense SNNs, our method maintains high clean accuracy and significantly improves adversarial robustness—e.g., +23.5% robust accuracy against PGD attacks on CIFAR-10—while reducing weight storage by 100× and improving energy efficiency by 8.6×.
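The summary above describes a pipeline: train a robust sparse ANN, then map its sparse connectivity and weights onto an SNN. The paper's own conversion algorithm is not reproduced on this page; the sketch below is only a minimal illustration of the general idea, using standard rate-coded ANN-to-SNN conversion with threshold balancing and a binary pruning mask. The 75% pruning level, layer sizes, and the `lif_layer_rate` helper are all illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical robust, sparse ANN layer: dense weights plus a binary pruning
# mask (assumption: both the weights and the mask are transferred to the SNN).
ann_weights = rng.normal(size=(4, 8))
mask = (rng.random((4, 8)) < 0.25).astype(float)  # ~75% of weights pruned
sparse_w = ann_weights * mask

# Threshold balancing (a common conversion heuristic): scale weights by the
# largest pre-activation seen on calibration data, so SNN firing rates can
# approximate the ANN's ReLU activations.
calib_inputs = rng.random((32, 8))
max_act = np.abs(calib_inputs @ sparse_w.T).max()
snn_w = sparse_w / max_act

def lif_layer_rate(x, w, T=100, v_th=1.0):
    """Integrate-and-fire layer: run T timesteps, return per-neuron firing rates."""
    v = np.zeros(w.shape[0])
    spikes = np.zeros(w.shape[0])
    for _ in range(T):
        v += w @ x            # integrate a constant input current
        fired = v >= v_th
        spikes += fired
        v[fired] -= v_th      # soft reset preserves the residual potential
    return spikes / T

x = calib_inputs[0]
rates = lif_layer_rate(x, snn_w)
target = np.maximum(snn_w @ x, 0.0)   # ReLU activation of the converted ANN
```

With soft reset and T timesteps, the firing rate matches the (normalized) ReLU output to within 1/T, and pruned connections stay exactly zero in the SNN, which is what makes the sparsity transfer for free.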

📝 Abstract
Deployment of deep neural networks in resource-constrained embedded systems requires innovative algorithmic solutions to facilitate their energy and memory efficiency. To further ensure the reliability of these systems against malicious actors, recent works have extensively studied adversarial robustness of existing architectures. Our work focuses on the intersection of adversarial robustness, memory- and energy-efficiency in neural networks. We introduce a neural network conversion algorithm designed to produce sparse and adversarially robust spiking neural networks (SNNs) by leveraging the sparse connectivity and weights from a robustly pretrained artificial neural network (ANN). Our approach combines the energy-efficient architecture of SNNs with a novel conversion algorithm, leading to state-of-the-art performance with enhanced energy and memory efficiency through sparse connectivity and activations. Our models are shown to achieve up to 100x reduction in the number of weights to be stored in memory, with an estimated 8.6x increase in energy efficiency compared to dense SNNs, while maintaining high performance and robustness against adversarial threats.
Problem

Research questions and friction points this paper is trying to address.

Enhancing energy and memory efficiency in neural networks
Ensuring adversarial robustness in resource-constrained systems
Converting dense ANNs to sparse SNNs for efficiency
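The robustness evaluations mentioned above use PGD (projected gradient descent) attacks. As a reminder of what that threat model looks like, here is a minimal PGD sketch on a toy linear binary classifier; the paper itself attacks SNNs on CIFAR-10, and the model, loss, and hyperparameters below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=8)                  # toy linear classifier
x, y = rng.normal(size=8), 1.0          # input and its label in {-1, +1}

eps, alpha, steps = 0.1, 0.02, 20
x_adv = x.copy()
for _ in range(steps):
    # Gradient of the margin loss -y * (w @ x) w.r.t. x is -y * w; for a
    # linear model this is constant, so PGD saturates the L-inf ball.
    grad = -y * w
    x_adv = x_adv + alpha * np.sign(grad)        # ascent step on the loss
    x_adv = x + np.clip(x_adv - x, -eps, eps)    # project into the L-inf ball

margin_clean = y * (w @ x)
margin_adv = y * (w @ x_adv)
```

The projection step is what makes this "projected" gradient descent: the perturbation can never exceed `eps` in any coordinate, yet the classification margin is provably reduced by `eps * sum(|w|)` for this linear toy model.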
Innovation

Methods, ideas, or system contributions that make the work stand out.

Converts ANNs to sparse robust SNNs
Leverages sparse connectivity for efficiency
Enhances energy and memory efficiency
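To see where a reported ~100x weight-storage reduction can come from, here is a back-of-the-envelope estimate. The 1% connection density, layer size, and COO-style byte accounting are assumptions chosen for illustration, not figures from the paper.

```python
# Back-of-the-envelope memory estimate for sparse connectivity.
dense_params = 512 * 512          # weights in one dense 512x512 layer
density = 0.01                    # assumed fraction of connections kept
n_nonzero = int(dense_params * density)

# Parameter-count view: only nonzero weights need to be stored.
count_reduction = dense_params / n_nonzero

# Byte-level view (illustrative COO format): each surviving weight costs a
# float32 value plus two int16 coordinates, so byte savings are smaller
# than the raw parameter-count reduction.
dense_bytes = dense_params * 4
sparse_bytes = n_nonzero * (4 + 2 + 2)
byte_reduction = dense_bytes / sparse_bytes
```

The gap between the two ratios is why sparse formats matter in practice: index overhead roughly halves the savings here, and formats like CSR or structured sparsity are typically used to claw some of that back.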
👥 Authors

Mathias Schmolli
Institute of Machine Learning and Neural Computation, Graz University of Technology, Austria

Maximilian Baronig
Institute of Machine Learning and Neural Computation, Graz University of Technology, Austria; TU Graz - SAL Dependable Embedded Systems Lab, Silicon Austria Labs, Austria

Robert Legenstein
Institute for Theoretical Computer Science, Graz University of Technology (Computational Neuroscience)

Ozan Özdenizci
Graz University of Technology (Machine Learning, Artificial Intelligence)