WINA: Weight Informed Neuron Activation for Accelerating Large Language Model Inference

📅 2025-05-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing training-free sparse activation methods select neurons based solely on hidden-state magnitudes, leading to high approximation error and limited inference accuracy. This paper proposes a training-free, plug-and-play sparse activation framework that, for the first time, jointly models the ℓ₂ norms of linear-layer weight columns and hidden-state magnitudes to derive a theoretically grounded neuron selection criterion, and establishes a tighter theoretical bound on approximation error. The method requires no fine-tuning, architectural modification, or additional training, ensuring broad applicability across architectures. Evaluated on multiple large language models (LLMs) and benchmark datasets, it achieves up to 2.94% higher average accuracy than TEAL at identical sparsity levels. The implementation is publicly available.
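The selection criterion described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes that for a linear layer y = Wx, each input neuron i is scored by |x_i| · ‖W[:, i]‖₂ and only the top-scoring fraction of neurons is kept active.

```python
import numpy as np

def wina_mask(x, W, sparsity):
    """Zero out the lowest-scoring input neurons for a linear layer y = W @ x.

    x: hidden state, shape (d_in,)
    W: weight matrix, shape (d_out, d_in)
    sparsity: fraction of neurons to deactivate, in [0, 1)
    """
    col_norms = np.linalg.norm(W, axis=0)          # ||W[:, i]||_2 per column
    scores = np.abs(x) * col_norms                 # WINA criterion: weight-informed
    k = int(round((1.0 - sparsity) * x.shape[0]))  # number of neurons to keep
    keep = np.argsort(scores)[-k:]                 # indices of the top-k scores
    x_sparse = np.zeros_like(x)
    x_sparse[keep] = x[keep]
    return x_sparse

# A magnitude-only baseline (TEAL-style) would instead score by np.abs(x) alone,
# ignoring how strongly each neuron is amplified by the weight matrix.
```

A neuron with a small activation but a large weight column can thus outrank a larger activation feeding a near-zero column, which is exactly the case magnitude-only selection misjudges.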

๐Ÿ“ Abstract
The growing computational demands of large language models (LLMs) make efficient inference and activation strategies increasingly critical. While recent approaches, such as Mixture-of-Experts (MoE), leverage selective activation, they require specialized training; training-free sparse activation methods offer broader applicability and superior resource efficiency through their plug-and-play design. However, many existing methods rely solely on hidden state magnitudes to determine activation, resulting in high approximation errors and suboptimal inference accuracy. To address these limitations, we propose WINA (Weight Informed Neuron Activation), a novel, simple, and training-free sparse activation framework that jointly considers hidden state magnitudes and the column-wise $\ell_2$-norms of weight matrices. We show that this leads to a sparsification strategy that obtains optimal approximation error bounds with theoretical guarantees tighter than existing techniques. Empirically, WINA also outperforms state-of-the-art methods (e.g., TEAL) by up to $2.94\%$ in average performance at the same sparsity levels, across a diverse set of LLM architectures and datasets. These results position WINA as a new performance frontier for training-free sparse activation in LLM inference, advancing training-free sparse activation methods and setting a robust baseline for efficient inference. The source code is available at https://github.com/microsoft/wina.
Problem

Research questions and friction points this paper is trying to address.

Efficient inference in large language models (LLMs) via sparse activation
Reducing approximation error in training-free activation methods
Improving inference accuracy by jointly considering hidden-state magnitudes and weight norms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selects neurons using both hidden-state magnitudes and column-wise weight norms
Training-free, plug-and-play sparse activation framework
Tighter theoretical approximation-error bounds than prior methods