AI Summary
This work identifies a previously unreported "Less is Less" paradox in sparse attention mechanisms: aggressive sparsification during long decoding discards information, which inflates generation length and increases inference overhead rather than yielding efficiency gains. To address this, the authors propose an adaptive early-stopping mechanism grounded in an information-gain threshold, which dynamically halts redundant token generation while preserving output quality. Extensive experiments across multiple reasoning-intensive benchmarks demonstrate that the method reduces token consumption by up to 90% with less than a 2% drop in accuracy, substantially improving the decoding efficiency of large language models without compromising performance.
Abstract
Large language models (LLMs) demonstrate strong capabilities across a wide range of complex tasks and are increasingly deployed at scale, placing significant demands on inference efficiency. Prior work typically decomposes inference into prefill and decode stages, with the decode stage dominating total latency. To reduce the time and memory complexity of the decode stage, a line of work introduces sparse-attention algorithms. In this paper, we show, both empirically and theoretically, that sparse attention can paradoxically increase end-to-end complexity: information loss often induces significantly longer output sequences, a phenomenon we term "Less is Less" (LiL). To mitigate the LiL problem, we propose an early-stopping algorithm that detects the point at which information loss exceeds information gain during sparse decoding. Our early-stopping algorithm reduces token consumption by up to 90% with a marginal accuracy degradation of less than 2% across reasoning-intensive benchmarks.
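The abstract does not spell out the stopping criterion, but the idea of halting once information loss outweighs information gain can be sketched with a simple proxy. The sketch below, which assumes the drop in predictive entropy between decoding steps as a stand-in for per-step information gain, stops generation once that gain stays below a threshold for a few consecutive steps; `toy_step` and all parameter names are hypothetical illustrations, not the paper's actual algorithm.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def generate_with_early_stop(step_fn, max_steps, gain_threshold, patience=3):
    """Decode step by step, halting once the per-step information gain
    (here: the drop in predictive entropy, a hypothetical proxy) stays
    below `gain_threshold` for `patience` consecutive steps."""
    tokens = []
    prev_h = None
    low_gain_steps = 0
    for t in range(max_steps):
        token, probs = step_fn(t)   # one decode step of the (sparse) model
        tokens.append(token)
        h = entropy(probs)
        if prev_h is not None:
            gain = prev_h - h       # information gained at this step
            low_gain_steps = low_gain_steps + 1 if gain < gain_threshold else 0
            if low_gain_steps >= patience:
                break               # gain has flatlined: stop decoding early
        prev_h = h
    return tokens

def toy_step(t):
    """Hypothetical model whose predictions sharpen, then plateau."""
    p = min(0.5 + 0.1 * t, 0.99)
    return t, [p, 1 - p]
```

Calling `generate_with_early_stop(toy_step, 50, 0.01)` terminates well before the 50-step budget, because once the toy model's entropy stops dropping, further tokens add no information under this proxy.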