AI Summary
This work addresses the energy efficiency and practicality bottlenecks of spiking neural networks (SNNs) in edge AI, which stem from the high computational cost of deep architectures and the absence of input-adaptive mechanisms. To overcome these limitations, the authors propose SPARQ, a novel framework that introduces dynamic early-exit mechanisms into SNNs for the first time, integrating quantization-aware training, reinforcement learning-driven adaptive inference path selection, and hardware-friendly sparse computation. Experimental results on MLP, LeNet, and AlexNet demonstrate that SPARQ reduces system energy consumption by over 330× compared to baseline SNNs, decreases synaptic operations by more than 90%, and achieves up to a 5.15% improvement in accuracy.
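The summary's core idea, input-adaptive early exits, can be illustrated with a toy sketch. The code below is not SPARQ's actual method (the paper's RL-guided exit policy and spiking dynamics are omitted); it shows only the general early-exit pattern, assumed here as a confidence-thresholded auxiliary classifier after each stage. All class and parameter names (`EarlyExitNet`, `threshold`, `dims`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class EarlyExitNet:
    """Toy feed-forward net with an auxiliary classifier after each stage.

    Inference stops at the first stage whose classifier confidence exceeds
    `threshold`, skipping the remaining (more expensive) stages. This is a
    generic early-exit sketch, not the SPARQ architecture itself.
    """
    def __init__(self, dims, n_classes, threshold=0.9):
        self.stages = [rng.normal(size=(a, b)) for a, b in zip(dims, dims[1:])]
        self.heads = [rng.normal(size=(b, n_classes)) for b in dims[1:]]
        self.threshold = threshold

    def predict(self, x):
        probs = None
        for i, (W, H) in enumerate(zip(self.stages, self.heads)):
            x = np.maximum(x @ W, 0.0)           # stage forward pass (ReLU)
            probs = softmax(x @ H)               # auxiliary exit classifier
            if probs.max() >= self.threshold:    # confident enough: exit early
                return int(probs.argmax()), i    # prediction and exit stage
        return int(probs.argmax()), len(self.stages) - 1  # final exit

net = EarlyExitNet(dims=[8, 16, 16, 16], n_classes=3, threshold=0.6)
label, exit_stage = net.predict(rng.normal(size=8))
```

Easy inputs that trigger a confident intermediate classifier exit after the first stage, so the later stages (and their synaptic operations) are never evaluated; harder inputs fall through to deeper exits.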
Abstract
Spiking neural networks (SNNs) offer inherent energy efficiency due to their event-driven computation model, making them promising for edge AI deployment. However, their practical adoption is limited by the computational overhead of deep architectures and the absence of input-adaptive control. This work presents SPARQ, a unified framework that integrates spiking computation, quantization-aware training, and reinforcement learning-guided early exits for efficient and adaptive inference. Evaluations across MLP, LeNet, and AlexNet architectures demonstrate that the proposed Quantized Dynamic SNNs (QDSNN) consistently outperform conventional SNNs and QSNNs, achieving up to 5.15% higher accuracy over QSNNs, over 330× lower system energy compared to baseline SNNs, and over 90% fewer synaptic operations across different datasets. These results validate SPARQ as a hardware-friendly, energy-efficient solution for real-time AI at the edge.
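The quantization-aware training mentioned above typically simulates low-precision arithmetic during the forward pass. The helper below is a minimal, hedged sketch of that round-trip (uniform "fake quantization"); the paper's actual quantization scheme, bit widths, and training integration are not specified here, and the function name is illustrative.

```python
import numpy as np

def fake_quantize(w, bits=4):
    """Uniformly quantize `w` to 2**bits levels over its value range,
    then map back to floats. Quantization-aware training commonly uses
    such a round-trip in the forward pass so the network learns to
    tolerate rounding error before low-precision deployment.
    """
    lo, hi = float(w.min()), float(w.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((w - lo) / scale)       # integer grid index per weight
    return q * scale + lo                # dequantized float approximation

rng = np.random.default_rng(1)
w = rng.normal(size=(5, 5))
wq = fake_quantize(w, bits=3)
```

With 3 bits, `wq` takes at most 8 distinct values and deviates from `w` by at most half a quantization step.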