Resting Neurons, Active Insights: Improving Input Sparsification for Large Language Models

📅 2025-12-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Input sparsification in large language models (LLMs) induces representational distortion and performance degradation. Method: This work reformulates input sparsification as a form of dynamic structural pruning and introduces trainable spontaneous neuron modules, biologically inspired by neuronal resting discharge, that deliver stable compensatory signals under sparse activation to mitigate representation collapse. The approach combines dynamic input sparsification, activation-stability-driven pruning, and spontaneous neuron compensation. Contribution/Results: It achieves substantial inference acceleration while significantly narrowing the performance gap between sparse and dense models, and empirical evaluation across multiple benchmarks demonstrates strong generalization and efficiency. The authors present this as the first method to jointly optimize computational efficiency and representation fidelity in input-sparsified LLMs.

📝 Abstract
Large Language Models (LLMs) achieve state-of-the-art performance across a wide range of applications, but their massive scale poses significant challenges for both efficiency and interpretability. Structural pruning, which reduces model size by removing redundant computational units such as neurons, has been widely explored as a solution. This study focuses on input sparsification, an increasingly popular technique that improves efficiency by selectively activating only a subset of entry values for each input. However, existing approaches focus primarily on computational savings, often overlooking the representational consequences of sparsification and leaving a noticeable performance gap compared to full models. In this work, we first reinterpret input sparsification as a form of dynamic structural pruning. Motivated by the spontaneous baseline firing rates observed in biological neurons, we introduce a small set of trainable spontaneous neurons that act as compensatory units to stabilize activations in sparsified LLMs. Experiments demonstrate that these auxiliary neurons substantially reduce the sparsification-induced performance gap while generalizing effectively across tasks.
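The mechanism described in the abstract can be sketched in a toy form: keep only the top-k entries of each input before a linear layer, then add a trainable baseline signal from a few always-on "spontaneous" units. This is a minimal illustrative reconstruction, not the paper's actual implementation; the class and parameter names (`SparsifiedLayer`, `n_spont`, `topk_sparsify`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def topk_sparsify(x, k):
    """Keep the k largest-magnitude entries in each row; zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x), axis=-1)[:, -k:]  # indices of top-k entries
    np.put_along_axis(out, idx, np.take_along_axis(x, idx, axis=-1), axis=-1)
    return out

class SparsifiedLayer:
    """Linear layer over sparsified inputs, plus a small bank of trainable
    'spontaneous' units whose summed output is added regardless of which
    inputs fire (hypothetical stand-in for the compensatory mechanism)."""

    def __init__(self, d_in, d_out, n_spont, k):
        self.W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
        self.k = k
        # Trainable baseline contributions of n_spont spontaneous neurons;
        # in the paper these would be learned to offset representation collapse.
        self.spont = rng.standard_normal((n_spont, d_out)) * 0.01

    def __call__(self, x):
        z = topk_sparsify(x, self.k) @ self.W  # only k of d_in inputs contribute
        return z + self.spont.sum(axis=0)      # add compensatory baseline

layer = SparsifiedLayer(d_in=16, d_out=8, n_spont=4, k=4)
x = rng.standard_normal((2, 16))
y = layer(x)
print(y.shape)  # (2, 8)
```

With k=4 of 16 inputs, 75% of the input-side multiplications are skipped, while the spontaneous units supply a stable additive signal even when the sparsified projection is heavily distorted.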
Problem

Research questions and friction points this paper is trying to address.

Improves input sparsification efficiency in LLMs
Reduces performance gap from sparsification via compensatory neurons
Stabilizes activations by mimicking biological neuron firing rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces trainable spontaneous neurons for stabilization
Reinterprets input sparsification as dynamic structural pruning
Reduces performance gap in sparsified large language models
Haotian Xu — Department of Applied Mathematics and Statistics, Stony Brook University
Tian Gao — Thomas J. Watson Research Center, IBM Research
Tsui-Wei Weng — UCSD (machine learning, deep learning)
Tengfei Ma — Stony Brook University (Natural Language Processing, Machine Learning, Healthcare, Graph Neural Networks)