SPEAR: Structured Pruning for Spiking Neural Networks via Synaptic Operation Estimation and Reinforcement Learning

📅 2025-06-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deploying spiking neural networks (SNNs) on resource-constrained neuromorphic hardware remains challenging due to the difficulty of satisfying strict synaptic operation (SynOps) constraints during pruning. Method: This paper proposes a structured pruning framework that, for the first time, treats SynOps as a hard constraint in reinforcement learning (RL)-based architecture search. It introduces a lightweight SynOps estimator (LRE) for real-time prediction of post-pruning SynOps and a task-adaptive reward (TAR) that dynamically balances accuracy degradation against constraint satisfaction, thereby preventing constraint violations during the search. Contribution/Results: Under stringent SynOps budgets, the framework achieves significant SNN compression and reduced computational cost while preserving high accuracy. Extensive experiments across multiple benchmark tasks demonstrate its effectiveness for efficient edge deployment on neuromorphic hardware.
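The summary describes LRE as a lightweight estimator that predicts post-pruning SynOps in real time; the paper's exact formulation is not reproduced here. As a hedged illustration only, a minimal estimator could assume that a layer's SynOps scale with the fraction of input and output channels kept after structured pruning (the function names and the linear model below are assumptions, not the paper's LRE):

```python
# Hypothetical sketch of a lightweight post-pruning SynOps estimator.
# The linear scaling model is an assumption for illustration.

def estimate_layer_synops(base_synops, in_keep_ratio, out_keep_ratio):
    """Predict one layer's SynOps after structured channel pruning,
    assuming SynOps scale linearly with the fraction of input and
    output channels kept (dense connectivity between layers)."""
    return base_synops * in_keep_ratio * out_keep_ratio


def estimate_network_synops(base_synops_per_layer, keep_ratios):
    """Sum per-layer estimates. Layer l's input keep ratio is layer
    l-1's output keep ratio; the first layer's input is unpruned."""
    total = 0.0
    prev_keep = 1.0
    for base, keep in zip(base_synops_per_layer, keep_ratios):
        total += estimate_layer_synops(base, prev_keep, keep)
        prev_keep = keep
    return total
```

Such an estimator lets an RL agent check, before retraining, whether a candidate pruning configuration is likely to stay within the SynOps budget.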

📝 Abstract
While deep spiking neural networks (SNNs) demonstrate superior performance, their deployment on resource-constrained neuromorphic hardware remains challenging. Network pruning offers a viable solution by reducing both parameters and synaptic operations (SynOps) to facilitate edge deployment of SNNs; among pruning approaches, search-based methods search for the post-pruning SNN structure. However, existing search-based methods cannot directly use SynOps as the constraint because SynOps change dynamically during the search, so the final searched network may violate the expected SynOps target. In this paper, we introduce a novel SNN pruning framework called SPEAR, which leverages reinforcement learning (RL) to use SynOps directly as the search constraint. To avoid violating the SynOps requirement, we first propose a SynOps prediction mechanism called LRE to accurately predict the final SynOps after the search. Observing that SynOps cannot be explicitly calculated and added as a constraint on the action in RL, we propose a novel reward called TAR to stabilize the search. Extensive experiments show that our SPEAR framework can effectively compress SNNs under specific SynOps constraints.
Problem

Research questions and friction points this paper is trying to address.

Reducing SNN parameters and SynOps to enable edge deployment
Accurately predicting post-pruning SynOps so the searched network meets the target constraint
Keeping RL-based search stable when SynOps change dynamically during the search
Innovation

Methods, ideas, or system contributions that make the work stand out.

Treats SynOps as a hard constraint in RL-based structured pruning
Predicts post-pruning SynOps with the lightweight LRE mechanism
Stabilizes the search with the task-adaptive TAR reward
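The TAR reward is described as dynamically balancing accuracy degradation against SynOps-constraint satisfaction; its exact form is not given on this page. A hedged sketch of one way such a trade-off could be expressed (the penalty shape and the `alpha` weight are assumptions, not the paper's definition):

```python
def tar_reward(accuracy, predicted_synops, synops_budget, alpha=1.0):
    """Hypothetical task-adaptive reward sketch: use task accuracy as
    the base reward and subtract a penalty proportional to how far the
    predicted SynOps overshoot the budget (zero when within budget)."""
    overshoot = max(0.0, predicted_synops / synops_budget - 1.0)
    return accuracy - alpha * overshoot
```

Within budget the reward reduces to plain accuracy, so the agent is only steered away from pruning configurations whose predicted SynOps exceed the target.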
Authors
Hui Xie, Beihang University
Yuhe Liu, Beihang University
Shaoqi Yang, Beihang University
Jinyang Guo, The University of Sydney (Deep Learning, Efficient Methods, Edge Computing)
Yufei Guo, Engineer (neural networks)
Yuqing Ma, Beihang University
Jiaxin Chen, Beihang University
Jiaheng Liu, Nanjing University
Xianglong Liu, Beihang University