SAFA-SNN: Sparsity-Aware On-Device Few-Shot Class-Incremental Learning with Fast-Adaptive Structure of Spiking Neural Network

📅 2025-10-03
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Few-shot class-incremental learning (FSCIL) on edge devices faces critical challenges, including data scarcity, stringent resource constraints, and catastrophic forgetting. To address these, this work introduces the first spiking neural network (SNN)-based framework for on-device FSCIL, designed to be sparsity-aware and rapidly adaptive. We propose a sparsity-conditioned neuronal dynamics mechanism and a subspace feature projection strategy to jointly mitigate overfitting and forgetting. To handle the non-differentiability of spike events, we adopt zeroth-order optimization, enabling event-driven sparse computation. The framework is hardware-efficient and deployable on neuromorphic chips with ultra-low power consumption. Evaluated on five benchmark datasets, it significantly outperforms state-of-the-art methods: on Mini-ImageNet, it achieves at least 4.01% higher accuracy in the final incremental stage while reducing energy consumption by 20%. These results demonstrate superior energy efficiency and strong generalization under severe resource and data constraints.

๐Ÿ“ Abstract
Continuous learning of novel classes is crucial for edge devices to preserve data privacy and maintain reliable performance in dynamic environments. The scenario becomes particularly challenging when data samples are insufficient, requiring on-device few-shot class-incremental learning (FSCIL) to maintain consistent model performance. Although existing work has explored parameter-efficient FSCIL frameworks based on artificial neural networks (ANNs), their deployment is still fundamentally constrained by limited device resources. Inspired by neural mechanisms, spiking neural networks (SNNs) process spatiotemporal information efficiently, offering lower energy consumption, greater biological plausibility, and better compatibility with neuromorphic hardware than ANNs. In this work, we present an SNN-based method for on-device FSCIL, i.e., Sparsity-Aware and Fast-Adaptive SNN (SAFA-SNN). We first propose sparsity-conditioned neuronal dynamics, in which most neurons remain stable while a subset stays active, thereby mitigating catastrophic forgetting. To further cope with spike non-differentiability in gradient estimation, we employ zeroth-order optimization. Moreover, during incremental learning sessions, we enhance the discriminability of new classes through subspace projection, which alleviates overfitting to novel classes. Extensive experiments on two standard benchmark datasets (CIFAR100 and Mini-ImageNet) and three neuromorphic datasets (CIFAR-10-DVS, DVS128 Gesture, and N-Caltech101) demonstrate that SAFA-SNN outperforms baseline methods, achieving at least a 4.01% improvement at the last incremental session on Mini-ImageNet and 20% lower energy cost than baseline methods in a practical implementation.
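The subspace projection idea from the abstract can be sketched as follows. This is an illustrative toy, not the paper's exact formulation: the SVD basis, the `rank` cutoff, and all shapes are assumptions. The idea is to project new-class features onto the orthogonal complement of the subspace spanned by old-class features, so that new-class directions do not interfere with what was already learned.

```python
import numpy as np

def orthogonal_complement_projector(old_features, rank):
    # SVD of old-class features; the top `rank` right-singular vectors
    # span the subspace occupied by the old classes.
    _, _, vt = np.linalg.svd(old_features, full_matrices=False)
    basis = vt[:rank]                                  # (rank, d), orthonormal rows
    # Projector onto the orthogonal complement: I - B^T B.
    return np.eye(old_features.shape[1]) - basis.T @ basis

rng = np.random.default_rng(1)
old = rng.normal(size=(100, 16))                       # features of old classes
proj = orthogonal_complement_projector(old, rank=8)

new_feat = rng.normal(size=16)                         # a new-class feature
projected = proj @ new_feat
# Components of `projected` along the old-class basis are (numerically) zero.
```

Because the basis rows are orthonormal, `B @ (I - B^T B) @ x = 0` exactly, so the projected feature carries no energy in the old-class subspace.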
Problem

Research questions and friction points this paper is trying to address.

Enabling continuous learning of novel classes on edge devices
Addressing few-shot class-incremental learning with limited data samples
Reducing energy consumption while maintaining model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparsity-conditioned neuronal dynamics mitigate catastrophic forgetting
Zeroth-order optimization handles spike non-differentiability
Subspace projection enhances discriminability of new classes
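The zeroth-order idea in the second bullet can be sketched with a two-point, SPSA-style estimator: because the spike function is a hard threshold, backpropagation is undefined, but a gradient can still be estimated from two loss evaluations under a random perturbation. This is a minimal toy, not the paper's implementation; the Heaviside spike, the rate-matching loss, and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def heaviside_spike(v, threshold=1.0):
    # Non-differentiable spike: fires when the membrane potential crosses threshold.
    return (v >= threshold).astype(float)

def loss(w, x, target):
    # Toy objective: match the mean firing rate of one layer to a target rate.
    spikes = heaviside_spike(x @ w)
    return (spikes.mean() - target) ** 2

def spsa_gradient(w, x, target, eps=1e-2):
    # Simultaneous-perturbation (zeroth-order) estimate: two loss evaluations
    # along a random ±1 direction, no backprop through the spike.
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    l_plus = loss(w + eps * delta, x, target)
    l_minus = loss(w - eps * delta, x, target)
    return (l_plus - l_minus) / (2 * eps) * delta

# Tiny loop on random data, purely to show the update rule.
x = rng.normal(size=(64, 8))
w = rng.normal(size=8)
for _ in range(200):
    w -= 0.1 * spsa_gradient(w, x, target=0.2)
```

Each update costs two forward passes regardless of the number of parameters, which is what makes zeroth-order methods attractive for event-driven hardware where backpropagation is unavailable.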
Huijing Zhang
Zhejiang University
Muyang Cao
Zhejiang University
Linshan Jiang
Research Fellow, Institute of Data Science (IDS), NUS
Privacy-Preserving Machine Learning, Collaborative Machine Learning, Edge-Cloud Collaboration, Web3
Xin Du
Zhejiang University
Di Yu
Zhejiang University
Changze Lv
Fudan University
Shuiguang Deng
Zhejiang University