Large Language Models Inference Engines based on Spiking Neural Networks

📅 2025-09-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the high energy consumption and poor sequence-length scalability of Transformer inference, as well as the large latency and weak scalability of existing spiking neural network (SNN) language-model conversion methods, this paper proposes NeurTransformer, an efficient, low-power SNN inference framework for large language models. Its core innovations are: (1) a spike-based self-attention (SSA) mechanism that removes the dependence on the long spike time-step windows inherent in conventional SNN conversion; and (2) a hybrid training strategy that transfers pretrained model weights and then applies supervised fine-tuning, replacing inefficient end-to-end surrogate learning. Evaluated on GPT-2 variants, NeurTransformer incurs only a 5-12% loss in output cosine similarity and a 9.7% reduction in perplexity, and cuts self-attention energy consumption by 64.71-85.28%, significantly improving energy efficiency and deployment feasibility.
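
For intuition, here is a minimal PyTorch sketch of what a spike-based self-attention block can look like, in the spirit of Spikformer-style spiking attention: queries, keys, and values are binarized into spikes and no softmax is used. The class name, the Heaviside spike function, and the 0.125 rescaling are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of a spike-based self-attention (SSA) block.
# Illustrative only: names, thresholds, and scaling are assumptions.
import torch
import torch.nn as nn


def spike(x: torch.Tensor, threshold: float = 1.0) -> torch.Tensor:
    # Heaviside step: emit a binary spike wherever the input crosses threshold.
    return (x >= threshold).float()


class SpikeSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = 0.125  # illustrative rescaling of spike-count sums
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.out_proj = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim), assumed already spike-encoded upstream.
        B, N, D = x.shape
        shape = (B, N, self.num_heads, self.head_dim)
        q = spike(self.q_proj(x)).view(shape).transpose(1, 2)
        k = spike(self.k_proj(x)).view(shape).transpose(1, 2)
        v = spike(self.v_proj(x)).view(shape).transpose(1, 2)
        # With binary q and k, q @ k^T is a matrix of spike-coincidence
        # counts: additions only, no floating-point multiplies and no
        # softmax, which is where the saving over analog attention comes from.
        attn = q @ k.transpose(-2, -1)
        out = spike(attn @ v * self.scale)  # re-spike the aggregated values
        return self.out_proj(out.transpose(1, 2).reshape(B, N, D))
```

A quick smoke test: `SpikeSelfAttention(dim=64)(torch.randn(2, 16, 64))` returns a `(2, 16, 64)` tensor.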

📝 Abstract
Foundational models based on the transformer architecture are currently the state of the art in general language modeling, as well as in scientific domains such as materials science and climate. However, training and deploying these models is computationally challenging, as their time and space complexity grows quadratically with the input sequence length. Several efforts have been made to explore efficient computational paradigms and model architectures that address these limitations. In this work, we explore spiking neural networks (SNNs) for designing transformer models. One challenge is that training large-scale SNNs with existing surrogate learning methods is inefficient and time-consuming. On the other hand, techniques that convert existing transformer-based models to their SNN equivalents are not scalable, as achieving optimal performance comes at the cost of a large number of spike time-steps, i.e., increased latency. To address this, we propose NeurTransformer, a methodology for designing transformer-based SNNs for inference that combines existing conversion methods with a supervised fine-tuning approach. The proposed methodology works by: (1) replacing the self-attention mechanism with a spike-based self-attention (SSA), (2) converting the feed-forward block of the trained transformer model to its equivalent SNN, and (3) fine-tuning the SSA block using SNN-based surrogate learning algorithms. We benchmark the proposed methodology and demonstrate its accuracy and scalability using three variants of the GPT-2 model of increasing size. The converted GPT-2 small models show a 5-12% loss in cosine similarity and a 9.7% reduction in perplexity. Finally, we demonstrate the energy efficiency of the SSA block compared to the ASA block, showing 64.71-85.28% reductions in estimated energy consumption when implementing the self-attention mechanism on digital hardware.
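
The 64.71-85.28% energy figures are estimates for digital hardware. A common way such estimates are produced in the SNN literature is to count multiply-accumulate (MAC) operations for the floating-point baseline and sparser accumulate (AC) operations for the spiking version, weighted by per-operation energy costs; the sketch below uses the frequently cited 45 nm CMOS figures (4.6 pJ per MAC, 0.9 pJ per AC). Whether the paper uses these exact constants, time-step counts, or firing rates is an assumption.

```python
# Back-of-the-envelope energy model of the kind commonly used to compare
# spiking vs. conventional attention on digital hardware. Constants are the
# oft-cited 45 nm CMOS figures; the paper's exact model is not known here.
E_MAC = 4.6e-12  # joules per 32-bit float multiply-accumulate
E_AC = 0.9e-12   # joules per 32-bit accumulate (spike-driven)


def attention_energy_ann(seq_len: int, dim: int) -> float:
    """MACs for Q @ K^T and attn @ V in standard self-attention."""
    macs = 2 * seq_len * seq_len * dim
    return macs * E_MAC


def attention_energy_snn(seq_len: int, dim: int,
                         timesteps: int, spike_rate: float) -> float:
    """Accumulates fire only on spikes, so cost scales with the firing
    rate and the number of simulation time-steps."""
    acs = 2 * seq_len * seq_len * dim * timesteps * spike_rate
    return acs * E_AC


if __name__ == "__main__":
    ann = attention_energy_ann(seq_len=1024, dim=768)
    snn = attention_energy_snn(seq_len=1024, dim=768,
                               timesteps=4, spike_rate=0.2)
    # With these assumed parameters the reduction is about 84%, in the
    # ballpark of the range reported in the abstract.
    print(f"reduction: {100 * (1 - snn / ann):.1f}%")
```
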
Problem

Research questions and friction points this paper is trying to address.

Reducing quadratic computational complexity of transformer models
Overcoming inefficient training of large-scale spiking neural networks
Minimizing latency in converted transformer-based SNN models (see the time-step sketch after this list)
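
The latency friction point in the last bullet comes from rate coding: a converted integrate-and-fire (IF) neuron approximates the ReLU activation it replaces only in its average firing rate, so the approximation error shrinks roughly as 1/T in the number of time-steps T. A small, self-contained illustration, assuming a soft-reset IF model:

```python
# Why converted SNNs need many time-steps: a rate-coded IF neuron only
# approximates its source ReLU activation as T grows. Purely illustrative;
# this is not the paper's conversion code.
import torch


def if_neuron_rate(x: torch.Tensor, timesteps: int,
                   threshold: float = 1.0) -> torch.Tensor:
    """Simulate an IF neuron for T steps and return its firing rate."""
    membrane = torch.zeros_like(x)
    spikes = torch.zeros_like(x)
    for _ in range(timesteps):
        membrane = membrane + x                   # integrate constant input
        fired = (membrane >= threshold).float()
        membrane = membrane - fired * threshold   # soft reset (subtract)
        spikes = spikes + fired
    return threshold * spikes / timesteps         # estimated activation


x = torch.tensor([0.05, 0.3, 0.75])  # clipped ReLU outputs to emulate
for T in (4, 32, 256):
    err = (if_neuron_rate(x, T) - x).abs().max().item()
    print(f"T={T:4d}  max |error| = {err:.4f}")  # error shrinks with T
```
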
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replace self-attention with spike-based self-attention
Convert feed-forward blocks to spiking neural networks
Fine-tune models using SNN-based surrogate learning (see the surrogate-gradient sketch after this list)
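
Surrogate learning works around the non-differentiable spike function: the forward pass keeps the hard Heaviside step, while the backward pass substitutes a smooth pseudo-derivative so gradients can flow into the SSA block during fine-tuning. The rectangular surrogate window and its width below are illustrative choices, not the paper's specific surrogate:

```python
# A minimal surrogate-gradient spike function for fine-tuning SNN blocks
# with backprop. Forward: hard Heaviside step. Backward: rectangular
# pseudo-derivative around the threshold (an illustrative choice).
import torch


class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane: torch.Tensor, threshold: float = 1.0):
        ctx.save_for_backward(membrane)
        ctx.threshold = threshold
        return (membrane >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Pass gradient only near the threshold; zero elsewhere.
        near_threshold = (membrane - ctx.threshold).abs() < 0.5
        return grad_output * near_threshold.float(), None


# Usage: drop-in replacement for the hard step inside a spiking block,
# making it trainable end to end.
membrane = torch.randn(4, requires_grad=True)
spikes = SurrogateSpike.apply(membrane)
spikes.sum().backward()
print(membrane.grad)
```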