SpikePack: Enhanced Information Flow in Spiking Neural Networks with High Hardware Compatibility

📅 2025-01-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address critical challenges in spiking neural networks (SNNs)—namely, severe information loss during spike transmission, poor hardware compatibility, and inefficient GPU utilization—this paper proposes SpikePack, a novel neuron model built upon an enhanced leaky integrate-and-fire (LIF) framework. SpikePack introduces a multi-bit spike-packing encoding scheme with O(1) time and space complexity that preserves biologically plausible membrane-potential leakage and reset dynamics while drastically mitigating information degradation and enabling near-lossless ANN-to-SNN conversion. By supporting both GPU-parallel processing and the serial inference characteristic of SNN accelerators, SpikePack maintains full compatibility with mainstream ANN architectures. Experiments demonstrate state-of-the-art performance across image classification, detection, and segmentation tasks, and an FPGA implementation validates its high sparsity, cross-platform efficiency, and significantly improved inference energy efficiency.
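The LIF dynamics summarized above (leaky integration, threshold firing, reset, and bit-level packing of the spike train) can be sketched roughly as follows. The paper's exact update rule, reset scheme, and packing order are not given here, so every detail below (the leak factor `tau`, the soft reset, little-endian bit order) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def lif_spike_pack(inputs, tau=2.0, v_th=1.0):
    """Toy LIF neuron over T timesteps whose binary spikes are
    packed into a single multi-bit integer (one bit per timestep)."""
    v = 0.0       # membrane potential
    packed = 0    # multi-bit spike word
    for t, x in enumerate(inputs):
        v = v / tau + x        # leaky integration of input current
        if v >= v_th:          # threshold crossing
            v -= v_th          # soft reset (assumed; hard reset would set v = 0)
            packed |= 1 << t   # record this timestep's spike as bit t
    return packed

# Four timesteps; only the last one accumulates past threshold.
word = lif_spike_pack(np.array([0.6, 0.6, 0.1, 0.9]))
```

Packing the whole spike train into one machine word is what allows multi-bit storage instead of single-bit-per-step storage, which is the property the summary credits for better GPU utilization.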

📝 Abstract
Spiking Neural Networks (SNNs) hold promise for energy-efficient, biologically inspired computing. We identify substantial information loss during spike transmission, linked to temporal dependencies in traditional Leaky Integrate-and-Fire (LIF) neurons, as a key factor potentially limiting SNN performance. Existing SNN architectures also underutilize modern GPUs, constrained by single-bit spike storage and isolated weight-spike operations that restrict computational efficiency. We introduce SpikePack, a neuron model designed to reduce transmission loss while preserving essential features like membrane potential reset and leaky integration. SpikePack achieves constant $\mathcal{O}(1)$ time and space complexity, enabling efficient parallel processing on GPUs while also supporting serial inference on existing SNN hardware accelerators. Compatible with standard Artificial Neural Network (ANN) architectures, SpikePack facilitates near-lossless ANN-to-SNN conversion across various networks. Experimental results on tasks such as image classification, detection, and segmentation show SpikePack achieves significant gains in accuracy and efficiency for both directly trained and converted SNNs over state-of-the-art models. Tests on FPGA-based platforms further confirm cross-platform flexibility, delivering high performance and enhanced sparsity. By enhancing information flow and rethinking SNN-ANN integration, SpikePack advances efficient SNN deployment across diverse hardware platforms.
Problem

Research questions and friction points this paper is trying to address.

Spiking Neural Networks
Information Efficiency
GPU Utilization
Innovation

Methods, ideas, or system contributions that make the work stand out.

SpikePack
Spiking Neural Networks (SNNs)
Efficient Information Processing
Guobin Shen
BrainCog Lab, CASIA, School of Future Technology, UCAS
Jindong Li
BrainCog Lab, CASIA, School of Artificial Intelligence, UCAS
Tenglong Li
Institute of Automation, Chinese Academy of Sciences
Hardware Architecture
Dongcheng Zhao
Beijing Institute of AI Safety and Governance
Spiking Neural Networks, Event Based Vision, Brain-inspired AI, LLM Safety
Yi Zeng
BrainCog Lab, CASIA, School of Future Technology, UCAS, School of Artificial Intelligence, UCAS