Universal Properties of Activation Sparsity in Modern Large Language Models

📅 2025-08-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing studies of LLM activation sparsity rely heavily on the ReLU assumption, which limits their applicability to modern models that use SwiGLU, GeLU, or other non-ReLU activations and has led to fragmented, model-specific methodologies and non-generalizable conclusions. Method: This work introduces a general framework for evaluating sparse activation under non-ReLU activations, systematically analyzing input-dependent sparsity patterns in the feed-forward network (FFN) layers of mainstream LLMs, including diffusion LLMs, via robustness-aware empirical analysis. Contribution/Results: The study identifies sparsity distributions that are consistent across models and uncovers a stable subset of low-activation neurons within FFNs. These findings offer insights into model interpretability and yield practical guidelines for exploiting sparsity in model design and inference acceleration.

📝 Abstract
Input-dependent activation sparsity is a notable property of deep learning models, which has been extensively studied in networks with ReLU activations and is associated with efficiency, robustness, and interpretability. However, the approaches developed for ReLU-based models depend on exact zero activations and do not transfer directly to modern large language models (LLMs), which have abandoned ReLU in favor of other activation functions. As a result, current work on activation sparsity in LLMs is fragmented, model-specific, and lacks consensus on which components to target. We propose a general framework to assess sparsity robustness and present a systematic study of the phenomenon in the FFN layers of modern LLMs, including diffusion LLMs. Our findings reveal universal patterns of activation sparsity in LLMs, provide insights into this phenomenon, and offer practical guidelines for exploiting it in model design and acceleration.
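The abstract's key point is that non-ReLU activations (e.g. SiLU in SwiGLU) almost never produce exact zeros, so sparsity must be measured against a magnitude threshold rather than equality with zero. The sketch below illustrates this idea on a toy SwiGLU FFN with random weights; the function names, dimensions, and threshold are illustrative and not taken from the paper.

```python
import numpy as np

def swiglu_ffn_hidden(x, W_gate, W_up):
    """Hidden activations of a SwiGLU FFN block: silu(x @ W_gate) * (x @ W_up)."""
    gate = x @ W_gate
    silu = gate / (1.0 + np.exp(-gate))  # SiLU (swish): x * sigmoid(x)
    return silu * (x @ W_up)

def sparsity_at_threshold(h, eps):
    """Fraction of hidden activations with magnitude below eps.

    Non-ReLU activations yield (almost) no exact zeros, so input-dependent
    sparsity is assessed with a magnitude threshold instead of h == 0.
    """
    return float(np.mean(np.abs(h) < eps))

rng = np.random.default_rng(0)
d_model, d_ff, n_tokens = 64, 256, 32  # toy sizes for illustration
x = rng.standard_normal((n_tokens, d_model))
W_gate = rng.standard_normal((d_model, d_ff)) / np.sqrt(d_model)
W_up = rng.standard_normal((d_model, d_ff)) / np.sqrt(d_model)

h = swiglu_ffn_hidden(x, W_gate, W_up)
print("exact zeros:", float(np.mean(h == 0.0)))
print("sparsity @ |h| < 0.1:", sparsity_at_threshold(h, eps=0.1))
```

Sweeping `eps` over a range and checking how stable the set of low-activation neurons is across inputs is, roughly, the kind of robustness question the proposed framework formalizes.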
Problem

Research questions and friction points this paper is trying to address.

Studying activation sparsity in modern large language models
Developing a general framework to assess sparsity robustness
Providing guidelines for exploiting sparsity in model design
Innovation

Methods, ideas, or system contributions that make the work stand out.

General framework for sparsity robustness assessment
Systematic study of activation sparsity patterns across LLM families
Practical guidelines for model design and acceleration