🤖 AI Summary
In spiking neural networks (SNNs), jointly optimizing key neuronal parameters—specifically the membrane time constant τ and firing threshold vₜₕ—to balance classification accuracy and energy efficiency remains challenging.
Method: We systematically construct and characterize the τ–vₜₕ “operational manifold” via controlled-variable parameter sweeps across multiple datasets and architectures, augmented by spike correlation analysis and adversarial robustness evaluation.
Contribution/Results: We identify a well-defined operational region supporting high accuracy, sparse spiking activity, and functional stability; reveal degradation mechanisms arising from parameter boundary violations, including pathological spike synchronization in anomalous regimes; and pinpoint a universally effective optimal operating point. This yields reproducible, transferable parameter tuning guidelines for practical neuromorphic deployment.
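The controlled-variable sweep described above can be illustrated with a minimal sketch: a single discrete-time leaky integrate-and-fire (LIF) neuron swept over a τ–vₜₕ grid, recording the firing rate at each point. The input current, grid values, and reset rule here are illustrative assumptions, not the paper's actual experimental settings.

```python
def lif_spike_rate(tau, v_th, input_current=1.5, dt=1.0, steps=200):
    """Simulate one discrete-time LIF neuron and return its firing rate.

    Membrane update: v <- v + (dt / tau) * (-v + I);
    the neuron spikes and hard-resets to 0 when v >= v_th.
    """
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += (dt / tau) * (-v + input_current)
        if v >= v_th:
            spikes += 1
            v = 0.0  # hard reset after a spike
    return spikes / steps

# Sweep an illustrative tau–v_th grid, mimicking the controlled-variable exploration.
taus = [1.5, 2.0, 5.0, 10.0]
thresholds = [0.5, 1.0, 2.0]
rates = {(tau, vth): lif_spike_rate(tau, vth) for tau in taus for vth in thresholds}
```

Even this toy sweep reproduces the qualitative boundary behavior: thresholds above the membrane's steady-state potential (here, the input current) silence the neuron entirely, while very low thresholds drive it to fire on every timestep.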
📝 Abstract
Spiking Neural Networks (SNNs) offer energy-efficient and biologically plausible alternatives to traditional artificial neural networks, but their performance depends critically on the tuning of neuron model parameters. In this work, we identify and characterize an operational manifold: a constrained region in the neuron hyperparameter domain (specifically the membrane time constant τ and firing threshold vₜₕ) within which the network exhibits meaningful activity and functional behavior. Operating inside this manifold yields optimal trade-offs between classification accuracy and spiking activity, while stepping outside leads to degeneration: either excessive energy use or complete network silence.
Through systematic exploration across datasets and architectures, we visualize and quantify this manifold and identify efficient operating points. We further assess robustness to adversarial noise, showing that SNNs exhibit increased spike correlation and internal synchrony when operating outside their optimal region. These findings highlight the importance of principled hyperparameter tuning to ensure both task performance and energy efficiency. Our results offer practical guidelines for deploying robust and efficient SNNs, particularly in neuromorphic computing scenarios.
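The spike-correlation analysis mentioned above can be sketched as mean pairwise Pearson correlation over binary spike trains; values near 1 indicate the pathological synchrony the abstract associates with out-of-region operation. The train shapes, firing probability, and random seed below are illustrative assumptions, not the paper's measurement protocol.

```python
import numpy as np

def mean_pairwise_correlation(spike_trains):
    """Mean pairwise Pearson correlation across binary spike trains.

    spike_trains: (n_neurons, n_timesteps) array of 0/1 spikes.
    Values near 1 indicate strong synchrony; near 0, decorrelated activity.
    """
    corr = np.corrcoef(spike_trains)                    # n_neurons x n_neurons
    off_diag = corr[~np.eye(len(corr), dtype=bool)]     # drop self-correlations
    return float(np.nanmean(off_diag))                  # NaN rows = silent neurons

rng = np.random.default_rng(0)
# Independent trains (healthy, decorrelated regime) vs. identical trains (full synchrony).
desync = (rng.random((8, 500)) < 0.2).astype(float)
sync = np.tile((rng.random(500) < 0.2).astype(float), (8, 1))
```

A diagnostic like this, applied layer by layer, is one simple way to detect the rise in internal synchrony under adversarial noise that the abstract reports.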