PASCAL: Precise and Efficient ANN-SNN Conversion using Spike Accumulation and Adaptive Layerwise Activation

📅 2025-05-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high inference latency and accuracy degradation in ANN-to-SNN conversion, this paper proposes a mathematically equivalent low-latency conversion framework. First, it establishes rigorous equivalence between ANNs and SNNs under the Quantization-Clip-Floor-Shift (QCFS) activation. Second, it introduces a layer-wise adaptive QCFS quantization strategy that allocates the minimal necessary number of timesteps per layer. Third, it incorporates a spike accumulation mechanism to improve the efficiency of temporal information utilization. Evaluated on ImageNet, the converted ResNet-34 SNN achieves ≈74% top-1 accuracy with only 1/64 the inference timesteps required by prior methods, significantly outperforming existing conversion approaches. The core contribution lies in the synergistic co-optimization of theoretically grounded equivalence modeling and hardware-friendly adaptive quantization, jointly achieving high accuracy and ultra-low latency.

📝 Abstract
Spiking Neural Networks (SNNs) have been put forward as an energy-efficient alternative to Artificial Neural Networks (ANNs) since they perform sparse Accumulate operations instead of the power-hungry Multiply-and-Accumulate operations. ANN-SNN conversion is a widely used method to realize deep SNNs with accuracy comparable to that of ANNs. Bu et al. (2023) recently proposed the Quantization-Clip-Floor-Shift (QCFS) activation as an alternative to ReLU to minimize the accuracy loss during ANN-SNN conversion. Nevertheless, SNN inference requires a large number of timesteps to match the accuracy of the source ANN for real-world datasets. In this work, we propose PASCAL, which performs ANN-SNN conversion in such a way that the resulting SNN is mathematically equivalent to an ANN with QCFS activation, thereby yielding similar accuracy as the source ANN with minimal inference timesteps. In addition, we propose a systematic method to configure the quantization step of QCFS activation in a layerwise manner, which effectively determines the optimal number of timesteps per layer for the converted SNN. Our results show that the ResNet-34 SNN obtained using PASCAL achieves an accuracy of ≈74% on ImageNet with a 64× reduction in the number of inference timesteps compared to existing approaches.
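As background for the abstract, a minimal sketch of the QCFS activation from Bu et al. (2023) that PASCAL builds on: it replaces ReLU during ANN training so that activations take one of L discrete levels, mirroring the average firing rate of a spiking neuron over L timesteps. The function below follows the published QCFS formula (threshold λ, quantization steps L, shift 1/2); the parameter names are illustrative, not taken from this paper's code.

```python
import numpy as np

def qcfs(x, lam=1.0, L=4):
    """Quantization-Clip-Floor-Shift (QCFS) activation.

    Quantizes inputs to L levels in [0, lam], approximating the
    average spike rate of an SNN neuron run for L timesteps.
    lam: (trainable) threshold; L: number of quantization steps.
    """
    # floor(x*L/lam + 1/2) quantizes with a half-step shift;
    # clip bounds the output to [0, lam] like a saturating ReLU.
    return lam * np.clip(np.floor(x * L / lam + 0.5) / L, 0.0, 1.0)
```

For example, with λ = 1 and L = 4 the outputs are restricted to {0, 0.25, 0.5, 0.75, 1.0}, which is what allows the converted SNN running for 4 timesteps to reproduce the ANN's activations exactly.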
Problem

Research questions and friction points this paper is trying to address.

Minimizes accuracy loss in ANN-SNN conversion
Reduces inference timesteps for efficient SNN operation
Optimizes layerwise activation for better performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mathematically equivalent ANN-SNN conversion with QCFS
Layerwise adaptive QCFS activation quantization
Minimal inference timesteps for high accuracy
Pranav Ramesh
Department of Computer Science and Engineering, Indian Institute of Technology (IIT) Madras
Gopalakrishnan Srinivasan
Assistant Professor at IIT Madras
RISC-V SoC · AI Accelerator Architectures · Deep Learning · Spiking Neural Networks