Efficient Long-Decoding Inference with Reasoning-Aware Attention Sparsity

📅 2025-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) incur O(N) time and memory costs when decoding long chains of thought (CoT) for mathematical and programming reasoning, hindering efficient inference. Method: This paper proposes Reasoning-aware Attention Sparsity (RaaS), an attention-sparsity algorithm designed specifically for the decode stage of reasoning tasks. RaaS identifies dynamic "milestone tokens"—key reasoning steps exhibiting a characteristic lifecycle (they emerge, are utilized, and then become unimportant)—and retains their key-value (KV) cache entries only for as long as they are needed. Contribution/Results: RaaS reduces both time and memory complexity from O(N) to O(L), where L ≪ N is the cache budget, resolving the "impossible trinity" among accuracy, time, and memory. Experiments show that RaaS matches the accuracy of state-of-the-art methods such as Quest while using O(L) rather than O(N) memory, enabling efficient, high-fidelity CoT inference.

📝 Abstract
Large Language Models (LLMs) have demonstrated strong capabilities across various domains, with recent advancements in challenging reasoning tasks such as mathematics and programming. However, solving reasoning tasks often requires long decoding chains (of thoughts), which incur $O(N)$ time and memory consumption, where $N$ is the chain length. To mitigate $O(N)$ time and memory consumption, existing sparsity-based algorithms propose retaining only the most critical token's intermediate data (i.e., key-value cache) and discarding the rest. However, these existing algorithms struggle with the "impossible trinity" of accuracy, time, and memory. For example, the state-of-the-art algorithm, Quest, achieves high accuracy with $O(L)$ time but $O(N)$ memory ($L$ is the cache budget, $L \ll N$). To address this issue, in this paper, we identify a new attention pattern during the decode stage of reasoning tasks, where milestone tokens (analogous to lemmas in mathematical proofs) emerge, are utilized, and then become unimportant afterward. Based on this pattern, we propose a new algorithm named RaaS that identifies and retains milestone tokens only until they are no longer needed, achieving high accuracy with $O(L)$ time and $O(L)$ memory complexity.
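The milestone-token idea in the abstract can be illustrated with a toy sketch. This is not the paper's actual RaaS implementation; it assumes a hypothetical cache class, a fixed budget `L`, and a simple exponential decay of per-token attention scores as a stand-in for the "emerge, are utilized, become unimportant" lifecycle. The cache keeps at most `L` tokens, evicting whichever token's attention has faded the most:

```python
class MilestoneKVCache:
    """Toy sketch (not the paper's exact algorithm): bound KV-cache size at
    `budget` tokens by evicting the token whose attention score has decayed
    the furthest, mimicking milestone tokens that fade once no longer used."""

    def __init__(self, budget, decay=0.9):
        self.budget = budget          # L: maximum resident tokens -> O(L) memory
        self.decay = decay            # how fast a stale milestone fades
        self.scores = {}              # token_id -> smoothed attention score

    def observe(self, attn_scores):
        """attn_scores: {token_id: attention weight at the current decode step}."""
        # Fade every resident token: a milestone not re-attended loses importance.
        for tid in self.scores:
            self.scores[tid] *= self.decay
        # Refresh (or admit) tokens attended to at this step.
        for tid, score in attn_scores.items():
            self.scores[tid] = max(self.scores.get(tid, 0.0), score)
        # Enforce the O(L) budget by evicting the most-faded token(s).
        while len(self.scores) > self.budget:
            victim = min(self.scores, key=self.scores.get)
            del self.scores[victim]

    def resident(self):
        return set(self.scores)


cache = MilestoneKVCache(budget=2)
cache.observe({0: 0.9, 1: 0.1})   # tokens 0 and 1 enter the cache
cache.observe({2: 0.8})           # token 2 enters; faded token 1 is evicted
```

Under this sketch, tokens that stop receiving attention (spent milestones) are the ones evicted, so memory stays at `L` entries regardless of how long the decoding chain grows.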
Problem

Research questions and friction points this paper is trying to address.

Reduce O(N) time and memory in long-decoding chains.
Address accuracy, time, and memory trade-offs in LLMs.
Identify and retain milestone tokens for efficient reasoning.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reasoning-Aware Attention Sparsity
Milestone Tokens Retention
O(L) Time and Memory Complexity
Junhao Hu
Peking University
Wenrui Huang
Nanjing University
Weidong Wang
Nanjing University
Zhenwen Li
Peking University
Tiancheng Hu
University of Cambridge
natural language processing, computational social science
Zhixia Liu
Huawei Cloud
Xusheng Chen
Huawei Cloud
Distributed Systems, Cloud Computing, Distributed Databases
Tao Xie
Peking University
Yizhou Shan
Huawei Cloud
Disaggregation, Operating Systems, Distributed Systems, Computer Architecture