SpikingMamba: Towards Energy-Efficient Large Language Models via Knowledge Distillation from Mamba

📅 2025-10-06
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the challenge of simultaneously improving energy efficiency and preserving performance in spiking neural network (SNN)-based large language models (LLMs), this paper proposes SpikingMamba, a highly energy-efficient sparsified LLM. Methodologically, the authors introduce TI-LIF, a ternary integer-valued leaky integrate-and-fire neuron, coupled with a smoothed gradient compensation pathway that preserves semantic polarity and mitigates quantization-induced degradation. They further employ single-stage knowledge distillation, augmented with reinforcement learning, to transfer zero-shot capabilities from a pretrained Mamba without full pretraining, while retaining sparse spike activations. Experiments demonstrate that SpikingMamba-1.3B achieves a 4.76× energy-efficiency gain over its non-spiking counterpart while retaining strong zero-shot generalization: zero-shot accuracy drops by only 4.78%, and RL-based fine-tuning recovers an additional 2.55%, substantially outperforming existing spiking language models.

📝 Abstract
Large Language Models (LLMs) have achieved remarkable performance across tasks but remain energy-intensive due to dense matrix operations. Spiking neural networks (SNNs) improve energy efficiency by replacing dense matrix multiplications with sparse accumulations. Their sparse spike activity enables efficient LLM deployment on edge devices. However, prior SNN-based LLMs often sacrifice performance for efficiency, and recovering accuracy typically requires full pretraining, which is costly and impractical. To address this, we propose SpikingMamba, an energy-efficient SNN-based LLM distilled from Mamba that improves energy efficiency with minimal accuracy sacrifice. SpikingMamba integrates two key components: (a) TI-LIF, a ternary-integer spiking neuron that preserves semantic polarity through signed multi-level spike representations; (b) a training-exclusive Smoothed Gradient Compensation (SGC) path that mitigates quantization loss while preserving spike-driven efficiency. We employ a single-stage distillation strategy to transfer the zero-shot ability of pretrained Mamba and further enhance it via reinforcement learning (RL). Experiments show that SpikingMamba-1.3B achieves a 4.76× energy benefit, with only a 4.78% zero-shot accuracy gap compared to the original Mamba, and achieves a further 2.55% accuracy improvement after RL.
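The abstract describes TI-LIF as a ternary-integer spiking neuron whose signed, multi-level spikes preserve semantic polarity. The paper's exact dynamics (leak placement, reset rule, quantization bounds) are not reproduced here, so the following is only a minimal sketch under standard leaky integrate-and-fire assumptions; the function name, parameters, and reset scheme are illustrative, not the authors' formulation.

```python
import numpy as np

def ti_lif_step(x, v, tau=2.0, v_th=1.0, levels=1):
    """One timestep of a hypothetical ternary-integer LIF neuron.

    Emits signed integer spike counts in {-levels, ..., 0, ..., +levels},
    so the sign (semantic polarity) of the membrane potential survives
    quantization. Reset is a "soft" subtraction of the emitted spikes.
    """
    v = v + (x - v) / tau  # leaky integration toward the input
    # Quantize the potential into signed integer spike levels.
    s = np.clip(np.trunc(v / v_th), -levels, levels)
    v = v - s * v_th       # soft reset: subtract what was emitted
    return s.astype(int), v
```

With `levels=1` the output alphabet is ternary, {-1, 0, +1}; larger `levels` would give the multi-level signed spikes the abstract mentions, at the cost of more accumulations per step.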
Problem

Research questions and friction points this paper is trying to address.

Developing energy-efficient spiking neural network large language models
Reducing accuracy loss in SNN-based LLMs without full pretraining
Enabling efficient deployment of LLMs on edge devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ternary-integer spiking neuron preserves semantic polarity
Smoothed gradient compensation mitigates quantization loss
Single-stage distillation transfers Mamba's zero-shot ability
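The single-stage distillation transfers Mamba's zero-shot ability without full pretraining. The paper's exact objective is not given on this page; a generic soft-label knowledge-distillation loss (temperature-scaled KL divergence between teacher and student logits, in the style of Hinton et al.) is sketched below as one plausible form. All names and the temperature default are assumptions.

```python
import numpy as np

def softmax(z, t=1.0):
    """Numerically stable softmax with temperature t."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic logit-distillation loss: KL(teacher || student) on
    temperature-softened distributions, scaled by t^2 so gradients
    keep their magnitude. A sketch, not the paper's exact objective.
    """
    t = temperature
    p = softmax(teacher_logits, t)  # soft teacher targets
    q = softmax(student_logits, t)  # student predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(np.mean(kl) * t * t)
```

When student and teacher logits agree the loss is zero, so the spiking student is pulled toward the dense Mamba teacher's output distribution token by token.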
Yulong Huang
The Hong Kong University of Science and Technology (Guangzhou)
Jianxiong Tang
Department of Computer Science, City University of Hong Kong
Chao Wang
Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen
Ziyi Wang
School of Computer Science and Technology, East China Normal University, Shanghai
Jianguo Zhang
Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen
Zhichao Lu
City University of Hong Kong
Bojun Cheng
The Hong Kong University of Science and Technology (Guangzhou)
Luziwei Leng
ACSLab, Huawei Technologies Co., Ltd., Shenzhen