🤖 AI Summary
Input sparsification in large language models (LLMs) induces representational distortion and performance degradation. Method: This work reformulates input sparsification as a form of dynamic structural pruning and introduces trainable spontaneous neuron modules, biologically inspired by neuronal resting discharge, that deliver stable compensatory signals under sparse activation to mitigate representation collapse. The approach integrates dynamic input sparsification, activation-stability-driven pruning, and spontaneous neuron compensation. Contribution/Results: It achieves substantial inference acceleration while significantly narrowing the performance gap between sparse and dense models, and empirical evaluation across multiple benchmarks demonstrates strong generalization and high efficiency. The authors present this as the first method to jointly optimize computational efficiency and representation fidelity in input-sparsified LLMs.
📝 Abstract
Large Language Models (LLMs) achieve state-of-the-art performance across a wide range of applications, but their massive scale poses significant challenges for both efficiency and interpretability. Structural pruning, which reduces model size by removing redundant computational units such as neurons, has been widely explored as a solution. This study focuses on input sparsification, an increasingly popular technique that improves efficiency by selectively activating only a subset of input entries for each example. However, existing approaches focus primarily on computational savings, often overlooking the representational consequences of sparsification and leaving a noticeable performance gap relative to full models. In this work, we first reinterpret input sparsification as a form of dynamic structural pruning. Motivated by the spontaneous baseline firing rates observed in biological neurons, we then introduce a small set of trainable spontaneous neurons that act as compensatory units to stabilize activations in sparsified LLMs. Experiments demonstrate that these auxiliary neurons substantially reduce the sparsification-induced performance gap while generalizing effectively across tasks.
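To make the core idea concrete, here is a minimal sketch of how top-k input sparsification with spontaneous-neuron compensation could look in a single layer. This is an illustration under assumptions, not the paper's implementation: the function name, the top-k selection rule, and the `spontaneous` baseline vector (a learned per-entry value substituted at masked positions) are all hypothetical.

```python
import numpy as np

def sparsify_with_spontaneous_compensation(x, W, b, spontaneous, k):
    """Hypothetical sketch: keep only the top-k input entries by magnitude,
    and let a trainable 'spontaneous' baseline stand in for the dropped ones,
    mimicking the resting discharge of biological neurons."""
    # Input sparsification: mask all but the k largest-magnitude entries.
    keep = np.argsort(np.abs(x))[-k:]
    mask = np.zeros_like(x, dtype=bool)
    mask[keep] = True
    x_sparse = np.where(mask, x, 0.0)
    # Compensation: masked positions emit a learned baseline signal
    # instead of going silent, stabilizing downstream activations.
    x_comp = x_sparse + np.where(mask, 0.0, spontaneous)
    # Ordinary dense layer (ReLU) applied to the compensated input.
    return np.maximum(W @ x_comp + b, 0.0)
```

In a real system `spontaneous` would be trained jointly with the model so that the compensatory signal approximates the average contribution of the pruned entries; here it simply replaces the zeros that plain sparsification would leave behind.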