On the Privacy-Preserving Properties of Spiking Neural Networks with Unique Surrogate Gradients and Quantization Levels

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Spiking neural networks (SNNs) deployed on sensitive data face growing privacy risks from membership inference attacks (MIAs), yet the interplay between privacy and utility under common training techniques remains poorly understood. Method: This work systematically investigates how quantization and surrogate gradient design affect the privacy–accuracy trade-off in SNNs. We propose an ROC-AUC-based MIA evaluation framework and benchmark five surrogate gradients (including spike rate escape and arctangent) as well as weight and activation quantization schemes. Results: We find, for the first time, that quantization significantly enhances SNN privacy with negligible accuracy degradation; that full-precision SNNs remain more resilient to MIAs than quantized artificial neural networks (ANNs); and that the spike rate escape surrogate gradient achieves the best balance between privacy preservation and model performance. These results establish quantization as an effective privacy-enhancing regularizer for SNNs and identify surrogate gradient selection as a key design degree of freedom for tuning the privacy–accuracy trade-off.
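To make the ROC-AUC evaluation concrete, here is a minimal sketch of how the AUC of a simple loss-threshold membership inference attack can be computed. The attack scores each sample by its negative per-sample loss (members tend to have lower training loss), and the AUC equals the probability that a random member outscores a random non-member (the Mann-Whitney formulation). The loss values below are illustrative placeholders, not data from the paper, and the paper's actual attack model may be more sophisticated.

```python
# Sketch: ROC-AUC of a loss-threshold membership inference attack.
# AUC = P(member score > non-member score), ties counted as 0.5.
# Lower AUC means the attacker distinguishes members less well,
# i.e. stronger privacy.

def mia_auc(member_scores, nonmember_scores):
    """Rank-based (Mann-Whitney) computation of the attack ROC-AUC."""
    wins = 0.0
    for m in member_scores:
        for n in nonmember_scores:
            if m > n:
                wins += 1.0
            elif m == n:
                wins += 0.5
    return wins / (len(member_scores) * len(nonmember_scores))

# Attack score: negative per-sample loss (higher => "looks like a member").
member_losses = [0.05, 0.10, 0.20, 0.40]      # illustrative training samples
nonmember_losses = [0.30, 0.50, 0.80, 1.20]   # illustrative held-out samples
auc = mia_auc([-l for l in member_losses], [-l for l in nonmember_losses])
print(auc)  # prints 0.9375
```

An AUC near 0.5 would indicate the attacker does no better than chance; techniques such as quantization aim to push the attack AUC toward that floor without sacrificing task accuracy.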

📝 Abstract
As machine learning models increasingly process sensitive data, understanding their vulnerability to privacy attacks is vital. Membership inference attacks (MIAs) exploit model responses to infer whether specific data points were used during training, posing a significant privacy risk. Prior research suggests that spiking neural networks (SNNs), which rely on event-driven computation and discrete spike-based encoding, exhibit greater resilience to MIAs than artificial neural networks (ANNs). This resilience stems from their non-differentiable activations and inherent stochasticity, which obscure the correlation between model responses and individual training samples. To enhance privacy in SNNs, we explore two techniques: quantization and surrogate gradients. Quantization, which reduces precision to limit information leakage, has improved privacy in ANNs. Given SNNs' sparse and irregular activations, quantization may further disrupt the activation patterns exploited by MIAs. We assess the vulnerability of SNNs and ANNs under weight and activation quantization across multiple datasets, using the attack model's receiver operating characteristic (ROC) curve area under the curve (AUC) metric, where lower values indicate stronger privacy, and evaluate the privacy-accuracy trade-off. Our findings show that quantization enhances privacy in both architectures with minimal performance loss, though full-precision SNNs remain more resilient than quantized ANNs. Additionally, we examine the impact of surrogate gradients on privacy in SNNs. Among five evaluated gradients, spike rate escape provides the best privacy-accuracy trade-off, while arctangent increases vulnerability to MIAs. These results reinforce SNNs' inherent privacy advantages and demonstrate that quantization and surrogate gradient selection significantly influence privacy-accuracy trade-offs in SNNs.
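The surrogate gradients discussed above exist because the spike activation (a Heaviside step) has zero derivative almost everywhere, so backpropagation substitutes a smooth stand-in near the firing threshold. The sketch below shows one common parameterization of the arctangent surrogate and an escape-noise-style form often associated with spike rate escape; these are textbook forms for illustration, not necessarily the exact variants the paper evaluates.

```python
import math

# Forward pass: non-differentiable spike. Backward pass: a smooth
# surrogate derivative replaces the Heaviside's (useless) true gradient.

def heaviside(u, threshold=1.0):
    """Forward spike: emit 1 if membrane potential crosses threshold."""
    return 1.0 if u >= threshold else 0.0

def atan_surrogate(u, threshold=1.0, alpha=2.0):
    """Arctangent surrogate derivative, i.e. d/du of
    (1/pi)*atan(pi*alpha*(u - threshold)/2) + 1/2."""
    x = math.pi * alpha * (u - threshold) / 2.0
    return alpha / (2.0 * (1.0 + x * x))

def spike_rate_escape_surrogate(u, threshold=1.0, beta=1.0):
    """Escape-noise style surrogate: gradient magnitude decays
    exponentially with distance from the threshold."""
    return beta * math.exp(-beta * abs(u - threshold))

# Both surrogates peak at the threshold and decay away from it; their
# shape controls how much each training sample imprints on the weights,
# which is one intuition for why the choice affects MIA vulnerability.
print(atan_surrogate(1.0), atan_surrogate(2.0))
print(spike_rate_escape_surrogate(1.0), spike_rate_escape_surrogate(2.0))
```

In a real SNN framework (e.g. a custom autograd function), the forward pass would call the Heaviside step while the backward pass would multiply incoming gradients by one of these surrogate derivatives.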
Problem

Research questions and friction points this paper is trying to address.

Enhance privacy in Spiking Neural Networks
Assess vulnerability to Membership Inference Attacks
Evaluate impact of quantization and surrogate gradients
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantization enhances SNN privacy
Surrogate gradients optimize privacy-accuracy trade-offs
SNNs inherently resist membership inference attacks