🤖 AI Summary
To address critical challenges in spiking neural networks (SNNs), namely severe information loss during spike transmission, poor hardware compatibility, and inefficient GPU utilization, this paper proposes SpikePack, a neuron model built on an enhanced leaky integrate-and-fire (LIF) framework. SpikePack introduces a multi-bit spike-packing encoding scheme with constant *O*(1) time and space complexity that preserves biologically plausible membrane-potential leakage and reset dynamics while substantially reducing information degradation and enabling near-lossless ANN-to-SNN conversion. By combining GPU-parallel execution with support for the serial inference style of existing SNN accelerators, SpikePack remains fully compatible with mainstream ANN architectures. Experiments demonstrate state-of-the-art performance across image classification, detection, and segmentation tasks, and an FPGA implementation validates its high sparsity, cross-platform efficiency, and significantly improved inference energy efficiency.
📝 Abstract
Spiking Neural Networks (SNNs) hold promise for energy-efficient, biologically inspired computing. We identify substantial information loss during spike transmission, linked to temporal dependencies in traditional Leaky Integrate-and-Fire (LIF) neurons, as a key factor potentially limiting SNN performance. Existing SNN architectures also underutilize modern GPUs, constrained by single-bit spike storage and isolated weight-spike operations that restrict computational efficiency. We introduce *SpikePack*, a neuron model designed to reduce transmission loss while preserving essential features such as membrane potential reset and leaky integration. *SpikePack* achieves constant $\mathcal{O}(1)$ time and space complexity, enabling efficient parallel processing on GPUs while also supporting serial inference on existing SNN hardware accelerators. Compatible with standard Artificial Neural Network (ANN) architectures, *SpikePack* facilitates near-lossless ANN-to-SNN conversion across various networks. Experimental results on tasks such as image classification, detection, and segmentation show *SpikePack* achieves significant gains in accuracy and efficiency, for both directly trained and converted SNNs, over state-of-the-art models. Tests on FPGA-based platforms further confirm cross-platform flexibility, delivering high performance and enhanced sparsity. By enhancing information flow and rethinking SNN-ANN integration, *SpikePack* advances efficient SNN deployment across diverse hardware platforms.
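To make the multi-bit spike-packing idea concrete, the sketch below shows one generic way a binary spike train over $T$ timesteps can be packed into a single integer per neuron, so spikes are stored and moved as one multi-bit word instead of $T$ single-bit values. This is an illustrative assumption, not the paper's actual implementation: the function names, the bit-ordering, and the use of NumPy are all hypothetical, and the real SpikePack neuron additionally models leakage and reset dynamics that this toy roundtrip omits.

```python
import numpy as np

def pack_spikes(spikes: np.ndarray) -> np.ndarray:
    """Pack a (T, N) binary spike array into (N,) integers.

    Bit t of each packed value holds the spike at timestep t, so a
    whole T-step spike train occupies one machine word per neuron
    (constant storage per neuron, independent of how spikes arrive).
    """
    T = spikes.shape[0]
    weights = (1 << np.arange(T)).astype(np.int64)  # 1, 2, 4, ...
    return spikes.astype(np.int64).T @ weights

def unpack_spikes(packed: np.ndarray, T: int) -> np.ndarray:
    """Inverse of pack_spikes: (N,) integers -> (T, N) binary array."""
    bits = (packed[None, :] >> np.arange(T)[:, None]) & 1
    return bits.astype(np.int8)

# Roundtrip check on a random spike train: T=4 timesteps, 3 neurons.
rng = np.random.default_rng(0)
s = (rng.random((4, 3)) < 0.5).astype(np.int8)
p = pack_spikes(s)
assert np.array_equal(unpack_spikes(p, 4), s)
```

Because each packed value is an ordinary integer, downstream weight-spike operations can use standard dense kernels (e.g. matrix multiplies) rather than per-bit scatter operations, which is the kind of GPU-friendly layout the abstract alludes to.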