HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds

📅 2023-08-20
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
Spiking neural networks (SNNs) suffer from poor robustness against adversarial attacks. To address this, we propose Homeostatic SNNs (HoSNNs), a novel architecture inspired by neural homeostasis. Our key contribution is the first incorporation of homeostatic regulation principles into SNN modeling, realized via a Threshold-Adaptive Leaky Integrate-and-Fire (TA-LIF) neuron that dynamically adjusts its firing threshold through local feedback to suppress membrane potential perturbations. We theoretically analyze robustness using BIBO stability theory, adversarially train with weak FGSM attacks (budget 2/255), and evaluate under much stronger PGD attacks (budget 8/255). Experiments on FashionMNIST, SVHN, CIFAR-10, and CIFAR-100 demonstrate substantial improvements in adversarial robustness, with classification accuracy increasing by 44.37%, 34.62%, 42.07%, and 16.62%, respectively. HoSNN establishes a new paradigm for building robust, brain-inspired computing models.
📝 Abstract
While spiking neural networks (SNNs) offer a promising neurally-inspired model of computation, they are vulnerable to adversarial attacks. We present the first study that draws inspiration from neural homeostasis to design a threshold-adapting leaky integrate-and-fire (TA-LIF) neuron model and utilize TA-LIF neurons to construct the adversarially robust homeostatic SNNs (HoSNNs) for improved robustness. The TA-LIF model incorporates a self-stabilizing dynamic thresholding mechanism, offering a local feedback control solution to the minimization of each neuron's membrane potential error caused by adversarial disturbance. Theoretical analysis demonstrates favorable dynamic properties of TA-LIF neurons in terms of the bounded-input bounded-output stability and suppressed time growth of membrane potential error, underscoring their superior robustness compared with the standard LIF neurons. When trained with weak FGSM attacks (attack budget = 2/255) and tested with much stronger PGD attacks (attack budget = 8/255), our HoSNNs significantly improve model accuracy on several datasets: from 30.54% to 74.91% on FashionMNIST, from 0.44% to 35.06% on SVHN, from 0.56% to 42.63% on CIFAR10, from 0.04% to 16.66% on CIFAR100, over the conventional LIF-based SNNs.
Problem

Research questions and friction points this paper is trying to address.

Enhancing robustness of spiking neural networks against adversarial attacks.
Introducing a threshold-adapting neuron model for improved stability.
Demonstrating superior accuracy under strong adversarial conditions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

TA-LIF neuron model with adaptive thresholds
Self-stabilizing dynamic thresholding mechanism
Improved robustness against adversarial attacks
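The core idea above — a LIF neuron whose firing threshold adapts through local feedback to damp perturbation-driven activity — can be sketched in a few lines. This is a minimal illustrative model, not the paper's exact TA-LIF equations: the adaptation rule (threshold relaxes toward a baseline but is pushed up by the neuron's own membrane potential) and all coefficients are assumptions for demonstration.

```python
def simulate_ta_lif(inputs, tau_m=10.0, tau_th=50.0, v_th0=1.0, alpha=0.2, dt=1.0):
    """Discrete-time sketch of a threshold-adaptive LIF neuron.

    The membrane potential v leaks toward zero and integrates the input.
    The threshold v_th relaxes toward its baseline v_th0 while being
    nudged upward by the neuron's own depolarization -- a stand-in for
    the homeostatic local-feedback rule; coefficients are illustrative.
    """
    v, v_th = 0.0, v_th0
    spikes = []
    for x in inputs:
        # leaky integration of the input current
        v += dt / tau_m * (-v + x)
        # assumed homeostatic rule: decay to baseline + activity-driven rise
        v_th += dt / tau_th * ((v_th0 - v_th) + alpha * max(v, 0.0))
        if v >= v_th:
            spikes.append(1)
            v = 0.0  # hard reset after a spike
        else:
            spikes.append(0)
    return spikes


# A sustained (e.g. perturbed) input drives fewer spikes over time than a
# fixed-threshold LIF would emit, since the threshold rises with activity.
spike_train = simulate_ta_lif([2.0] * 100)
```

A fixed-threshold LIF corresponds to `alpha=0` and `tau_th → ∞`; comparing spike counts between the two settings under an input perturbation shows the suppressive effect the paper attributes to threshold adaptation.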
Hejia Geng
Researcher @ Oxford
Peng Li
University of California, Santa Barbara, Santa Barbara, CA 93106