Lil: Less is Less When Applying Post-Training Sparse-Attention Algorithms in Long-Decode Stage

πŸ“… 2026-01-06
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
This work identifies a previously unreported β€œLess is Less” paradox in sparse attention mechanisms, where aggressive sparsification during long decoding sequences can lead to information loss, causing generation bloat and increased inference overhead rather than efficiency gains. To address this, the authors propose an adaptive early-stopping mechanism grounded in information gain thresholds, which dynamically halts redundant token generation while preserving output quality. Extensive experiments across multiple reasoning-intensive benchmarks demonstrate that the method reduces token consumption by up to 90% with less than a 2% drop in accuracy, substantially improving the decoding efficiency of large language models without compromising performance.
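For context, the decode-stage sparse attention the summary refers to can be sketched as a top-k selection over the KV cache at each new token. The following is a generic illustration, not the paper's specific algorithm; the function name `sparse_decode_step` and the array shapes are assumptions for exposition. The unselected cache entries are simply dropped, which is exactly the information loss the summary ties to longer generations.

```python
import numpy as np

def sparse_decode_step(q, K, V, k=64):
    """One decode step with generic post-training top-k sparse attention.

    q: (d,) query for the current token
    K: (T, d) cached keys; V: (T, d) cached values
    Only the k highest-scoring cached tokens attend; all others are
    masked out, i.e. their information is discarded for this step.
    (Illustrative sketch; not the paper's algorithm.)
    """
    scores = K @ q / np.sqrt(q.shape[-1])      # (T,) attention logits
    k = min(k, scores.shape[0])
    keep = np.argpartition(scores, -k)[-k:]    # indices of the top-k tokens
    masked = np.full_like(scores, -np.inf)     # drop everything else
    masked[keep] = scores[keep]
    weights = np.exp(masked - masked.max())    # softmax over kept tokens
    weights /= weights.sum()
    return weights @ V                         # (d,) attention output
```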

πŸ“ Abstract
Large language models (LLMs) demonstrate strong capabilities across a wide range of complex tasks and are increasingly deployed at scale, placing significant demands on inference efficiency. Prior work typically decomposes inference into prefill and decode stages, with the decode stage dominating total latency. To reduce time and memory complexity in the decode stage, a line of work introduces sparse-attention algorithms. In this paper, we show, both empirically and theoretically, that sparse attention can paradoxically increase end-to-end complexity: information loss often induces significantly longer sequences, a phenomenon we term "Less is Less" (Lil). To mitigate the Lil problem, we propose an early-stopping algorithm that detects the threshold where information loss exceeds information gain during sparse decoding. Our early-stopping algorithm reduces token consumption by up to 90% with a marginal accuracy degradation of less than 2% across reasoning-intensive benchmarks.
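The abstract states the stopping rule only at a high level: halt decoding once information loss exceeds information gain. A minimal sketch of one plausible reading is below, assuming per-token information gain is proxied by surprisal (negative log-probability) averaged over a sliding window; `decode_with_early_stop`, `window`, and `min_gain` are hypothetical names, not from the paper.

```python
from collections import deque

def decode_with_early_stop(token_stream, window=16, min_gain=0.05):
    """Hedged sketch of gain-thresholded early stopping (not the paper's exact rule).

    token_stream yields (token, logprob) pairs from the sparse decoder.
    'Information gain' is proxied by per-token surprisal in nats; once the
    model emits a full window of low-surprisal, redundant tokens, further
    decoding is assumed to add little, and generation halts.
    """
    tokens = []
    recent = deque(maxlen=window)          # sliding window of surprisals
    for tok, lp in token_stream:
        tokens.append(tok)
        recent.append(-lp)                 # surprisal as the gain proxy (assumption)
        if len(recent) == window and sum(recent) / window < min_gain:
            break                          # average gain below threshold: stop
    return tokens
```

Under this reading, `min_gain` plays the role of the crossover threshold the abstract describes, and calibrating it per benchmark would be what trades the reported up-to-90% token savings against the under-2% accuracy loss.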
Problem

Research questions and friction points this paper is trying to address.

sparse attention
decode stage
information loss
inference efficiency
sequence length
Innovation

Methods, ideas, or system contributions that make the work stand out.

sparse attention
decode stage
early-stopping
information loss
inference efficiency
Junhao Hu
Peking University
LLM systems, LLM applications
Fangze Li
School of Computer Science, Nanjing University, Nanjing, China
Mingtao Xu
Tencent, Shenzhen, China
Feifan Meng
School of Computer Science, Nanjing University, Nanjing, China
Shiju Zhao
School of Computer Science, Nanjing University, Nanjing, China
Tiancheng Hu
University of Cambridge
natural language processing, computational social science
Ting Peng
Tencent, Shenzhen, China
Anmin Liu
SCS, Peking University, Beijing, China
Wenrui Huang
School of Computer Science, Nanjing University, Nanjing, China
Chenxu Liu
EECS, Peking University
GUI Testing, Computer Vision, Software Testing
Ziyue Hua
SCS, Peking University, Beijing, China
Tao Xie
Peking University Chair Professor, Fudan University Adjunct Top-Talent Professor
Software Engineering, Software Testing, Software Analytics, Mining Software Repositories