Towards Efficient and Accurate Spiking Neural Networks via Adaptive Bit Allocation

๐Ÿ“… 2025-06-30
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Multi-bit spiking neural networks (SNNs) suffer from rapidly escalating memory and computational overhead as bit-width increases, leading to a deteriorated energyโ€“accuracy trade-off. To address this, we propose the first learnable framework for joint optimization of temporal length and bit-width, featuring an enhanced differentiable spiking neuron model and a gradient-driven step-size update mechanism that enables layer-wise fine-grained adaptive bit allocation. Leveraging quantization-aware modeling and end-to-end differentiable training, our approach effectively mitigates quantization error and gradient mismatch. Extensive experiments on CIFAR, ImageNet, and DVS datasets demonstrate significant improvements: SEWResNet-34 achieves a 2.69% accuracy gain on ImageNet while reducing the bit budget by 4.16ร—, substantially enhancing both energy efficiency and accuracy of SNNs.

๐Ÿ“ Abstract
Multi-bit spiking neural networks (SNNs) have recently become an active research topic, pursuing energy-efficient and highly accurate AI. However, as more bits are involved, the associated memory and computation demands escalate to the point where the performance improvements become disproportionate. Based on the insight that different layers have different importance and that extra bits can be wasted or even interfering, this paper presents an adaptive bit allocation strategy for directly trained SNNs, achieving fine-grained layer-wise allocation of memory and computation resources and thereby improving the SNN's efficiency and accuracy. Specifically, we parametrize the temporal lengths and the bit widths of weights and spikes, making them learnable and controllable through gradients. To address the challenges caused by changeable bit widths and temporal lengths, we propose a refined spiking neuron that can handle different temporal lengths, enables the derivation of gradients for temporal lengths, and better suits spike quantization. In addition, we theoretically formulate the step-size mismatch problem of learnable bit widths, which can incur severe quantization errors in the SNN, and accordingly propose a step-size renewal mechanism to alleviate this issue. Experiments on various datasets, including the static CIFAR and ImageNet and the dynamic CIFAR-DVS and DVS-GESTURE, demonstrate that our methods reduce the overall memory and computation cost while achieving higher accuracy. In particular, our SEWResNet-34 achieves a 2.69% accuracy gain and a 4.16$\times$ lower bit budget over the advanced baseline work on ImageNet. This work will be fully open-sourced.
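The step-size mismatch problem the abstract formulates can be illustrated with a minimal uniform quantizer. The sketch below is a hypothetical illustration, not the paper's method: `quantize`, `step`, and `bits` are names chosen here. It shows that when a learnable bit width shrinks (e.g. 4 bits to 2 bits) but the step size learned for the old bit width is kept, the representable range collapses and clipping error spikes; renewing the step size to match the new bit width restores the range.

```python
import numpy as np

def quantize(x, step, bits):
    """Uniform symmetric quantizer (illustrative): round to the nearest
    multiple of `step`, then clamp to the signed integer range of `bits`."""
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(x / step), -qmax - 1, qmax)
    return q * step

x = np.linspace(-1.0, 1.0, 101)

# Step size tuned for a 4-bit grid over [-1, 1].
step4 = 2.0 / (2 ** 4)

# Bit width drops to 2 but the stale 4-bit step is kept: heavy clipping.
err_mismatch = np.abs(quantize(x, step4, 2) - x).max()

# Step-size renewal: rescale the step so the 2-bit grid still spans [-1, 1].
step2 = 2.0 / (2 ** 2)
err_renewed = np.abs(quantize(x, step2, 2) - x).max()

print(err_mismatch, err_renewed)  # renewal gives strictly lower worst-case error
```

In the paper's setting the step size is a learnable parameter updated by gradients; the renewal mechanism plays the role of the explicit rescaling shown here.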
Problem

Research questions and friction points this paper is trying to address.

Optimizing bit allocation in SNNs for efficiency and accuracy
Reducing memory and computation costs in multi-bit SNNs
Addressing step-size mismatch in learnable bit widths
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive bit allocation for SNN efficiency
Refined spiking neuron handles variable lengths
Step-size renewal reduces quantization errors
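The "refined spiking neuron handles variable lengths" point can be made concrete with a generic leaky integrate-and-fire neuron unrolled over a candidate temporal length `T`. This is a hedged stand-in, not the paper's refined neuron: the names `lif_forward`, `v_th`, and `decay` are assumptions, and the actual model additionally makes `T` differentiable via surrogate gradients.

```python
import numpy as np

def lif_forward(inputs, T, v_th=1.0, decay=0.5):
    """Unroll a plain LIF neuron for T timesteps (illustrative only).
    inputs: array of shape (T, n) with the input current per timestep."""
    v = np.zeros_like(inputs[0])
    spikes = []
    for t in range(T):
        v = decay * v + inputs[t]            # leaky membrane integration
        s = (v >= v_th).astype(v.dtype)      # hard threshold; training would
                                             # use a surrogate gradient here
        v = v - s * v_th                     # soft reset after a spike
        spikes.append(s)
    return np.stack(spikes)

out = lif_forward(np.full((4, 3), 0.6), T=4)
print(out.sum())  # each of the 3 neurons crosses threshold once in 4 steps
```

Because `T` is an argument rather than a fixed architectural constant, the same neuron can be evaluated under different temporal lengths, which is the property the learnable temporal-length search relies on.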
๐Ÿ”Ž Similar Papers
No similar papers found.
Xingting Yao
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences; School of Future Technology, University of Chinese Academy of Sciences
Qinghao Hu
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Fei Zhou
HAUT
deep learning, target detection, image processing
Tielong Liu
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences; School of Future Technology, University of Chinese Academy of Sciences
Gang Li
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Peisong Wang
CASIA
Deep Neural Network Acceleration and Compression
Jian Cheng
The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences; School of Future Technology, University of Chinese Academy of Sciences