ReverB-SNN: Reversing Bit of the Weight and Activation for Spiking Neural Networks

📅 2025-06-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the limited representational capacity and accuracy degradation caused by binary activations in Spiking Neural Networks (SNNs), this paper proposes a "binary-weight, real-valued-activation" paradigm: weights are learned via differentiable binarization to preserve event-driven, multiplication-free computation, while activations remain real-valued to enhance feature representation. The authors introduce a learnable magnitude factor and a training-inference decoupled re-parameterization mechanism, enabling joint optimization of accuracy and energy efficiency. Extensive experiments across diverse SNN architectures and both dynamic (e.g., DVS128 Gesture, CIFAR10-DVS, NMNIST) and static datasets demonstrate consistent superiority over state-of-the-art methods, achieving average accuracy gains of 3.2–7.8%. Crucially, the proposed approach retains the intrinsic energy efficiency of SNNs, making it suitable for low-power neuromorphic computing.

πŸ“ Abstract
The Spiking Neural Network (SNN), a biologically inspired neural network infrastructure, has garnered significant attention recently. SNNs utilize binary spike activations for efficient information transmission, replacing multiplications with additions, thereby enhancing energy efficiency. However, binary spike activation maps often fail to capture sufficient data information, resulting in reduced accuracy. To address this challenge, we advocate reversing the bit of the weight and activation for SNNs, called ReverB-SNN, inspired by recent findings that highlight greater accuracy degradation from quantizing activations compared to weights. Specifically, our method employs real-valued spike activations alongside binary weights in SNNs. This preserves the event-driven and multiplication-free advantages of standard SNNs while enhancing the information capacity of activations. Additionally, we introduce a trainable factor within binary weights to adaptively learn suitable weight amplitudes during training, thereby increasing network capacity. To maintain efficiency akin to vanilla ReverB-SNN, our trainable binary weight SNNs are converted back to standard form using a re-parameterization technique during inference. Extensive experiments across various network architectures and datasets, both static and dynamic, demonstrate that our approach consistently outperforms state-of-the-art methods.
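The binary-weight, real-valued-activation idea described above can be sketched as a small PyTorch layer. This is an illustrative sketch under stated assumptions, not the authors' implementation: the class name, the per-output-channel placement of the trainable magnitude factor `alpha`, and the use of a straight-through estimator for the sign function are all assumptions.

```python
import torch
import torch.nn as nn


class BinaryWeightLinear(nn.Module):
    """Linear layer with sign-binarized weights scaled by a trainable
    magnitude factor `alpha`; activations stay real-valued.
    Hypothetical sketch, not the paper's reference code."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        # Trainable amplitude, one factor per output channel (assumption).
        self.alpha = nn.Parameter(torch.ones(out_features, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Forward pass uses {-1, +1} weights; backward pass passes the
        # gradient straight through to the real-valued latent weights.
        w_bin = torch.sign(self.weight)
        w_ste = self.weight + (w_bin - self.weight).detach()
        return nn.functional.linear(x, self.alpha * w_ste)
```

Because the weights are ±1 (up to the learned scale), the accumulation itself is addition-only, while the real-valued input activations carry more information than binary spikes.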
Problem

Research questions and friction points this paper is trying to address.

Enhancing SNN accuracy by reversing weight and activation bits
Preserving SNN efficiency while improving activation information capacity
Adaptively learning weight amplitudes to increase network capacity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reversing bit of weight and activation
Using real-valued spike activations
Introducing trainable factor in binary weights
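The abstract mentions converting the trainable binary-weight network back to standard form via re-parameterization at inference. One plausible version of that conversion is folding the learned magnitude factor out of the weights, so deployment uses pure {-1, +1} weights and applies the factor once after accumulation. The function names and the exact folding target are assumptions for illustration; the paper's scheme may differ.

```python
import torch


def reparameterize(weight: torch.Tensor, alpha: torch.Tensor):
    """Split a scaled binary weight into deployable parts: pure {-1, +1}
    weights (addition-only accumulation) plus a per-output-channel scale
    applied afterwards. Hypothetical sketch of the training->inference step."""
    w_bin = torch.sign(weight)       # standard binary weights for deployment
    return w_bin, alpha.flatten()    # scale moved out of the weights


def infer_linear(x: torch.Tensor, w_bin: torch.Tensor, alpha: torch.Tensor):
    # Accumulate with +/-1 weights (multiplication-free in hardware),
    # then rescale each output channel by its learned factor.
    acc = x @ w_bin.t()
    return acc * alpha
```

Mathematically the two paths are identical, since scaling each weight row by `alpha` before the matrix product equals scaling the corresponding output column after it; this is what lets training-time capacity gains come for free at inference.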
Yufei Guo
Engineer, neural networks
Yuhan Zhang
Intelligent Science & Technology Academy of CASIC, China
Zhou Jie
Xiaode Liu
Xin Tong
Yuanpei Chen
South China University of Technology, Robotics
Weihang Peng
Zhe Ma