Dynamic Thinking-Token Selection for Efficient Reasoning in Large Reasoning Models

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the substantial computational redundancy and memory overhead in large reasoning models during the generation of reasoning traces, which significantly hampers inference efficiency. The study reveals, for the first time, that only a small subset of critical tokens predominantly governs the final decision. Building on this insight, the authors propose a dynamic key-value (KV) cache pruning mechanism grounded in attention graph analysis, which identifies and retains only those tokens that are decisive for the answer while discarding redundant ones. This approach markedly reduces both memory consumption and computational cost, achieving highly efficient inference with negligible degradation in reasoning performance.

📝 Abstract
Large Reasoning Models (LRMs) excel at solving complex problems by explicitly generating a reasoning trace before deriving the final answer. However, these extended generations incur a substantial memory footprint and computational overhead, bottlenecking LRMs' efficiency. This work uses attention maps to analyze the influence of reasoning traces and uncovers an interesting phenomenon: only some decision-critical tokens in a reasoning trace steer the model toward the final answer, while the remaining tokens contribute negligibly. Building on this observation, we propose Dynamic Thinking-Token Selection (DynTS). This method identifies decision-critical tokens and retains only their associated Key-Value (KV) cache states during inference, evicting the remaining redundant entries to optimize efficiency.
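The core idea from the abstract, selecting the most-attended reasoning tokens and evicting the rest of the KV cache, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name `prune_kv_cache`, the use of a single aggregated attention-mass vector, and the `keep_ratio` parameter are all assumptions for the sake of the example (the paper derives token importance from an attention-graph analysis).

```python
import numpy as np

def prune_kv_cache(keys, values, attn_mass, keep_ratio=0.5):
    """Toy attention-based KV cache pruning (hypothetical helper).

    keys, values: (seq_len, d) cached states for the reasoning trace.
    attn_mass: (seq_len,) total attention each cached token receives
        from later decoding steps (a stand-in for the paper's
        attention-graph importance score).
    Keeps only the top `keep_ratio` fraction of tokens by attention
    mass ("decision-critical" tokens) and evicts the rest.
    """
    seq_len = keys.shape[0]
    n_keep = max(1, int(seq_len * keep_ratio))
    # Indices of the most-attended tokens, restored to sequence order
    # so positional information is preserved.
    keep_idx = np.sort(np.argsort(attn_mass)[-n_keep:])
    return keys[keep_idx], values[keep_idx], keep_idx

# Toy example: 8 cached reasoning tokens with 4-dim states.
rng = np.random.default_rng(0)
K = rng.standard_normal((8, 4))
V = rng.standard_normal((8, 4))
# Made-up attention masses: tokens 0, 2, 3, 6 dominate.
attn = np.array([0.30, 0.01, 0.04, 0.25, 0.02, 0.005, 0.35, 0.025])
K_p, V_p, idx = prune_kv_cache(K, V, attn, keep_ratio=0.5)
print(idx)  # -> [0 2 3 6]
```

In a real decoder this selection would run periodically during generation, so memory stays bounded by the number of retained decision-critical tokens rather than the full trace length.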
Problem

Research questions and friction points this paper is trying to address.

Large Reasoning Models
reasoning trace
memory footprint
computational overhead
KV cache
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Thinking-Token Selection
Large Reasoning Models
KV cache pruning
attention analysis
efficient reasoning
Zhenyuan Guo
Zhejiang University, Hangzhou, China
Tong Chen
Zhejiang University, Hangzhou, China
Wenlong Meng
Zhejiang University, Hangzhou, China
Chen Gong
University of Virginia
Privacy, AI Security, Reinforcement Learning, Software Engineering
Xin Yu
Ningbo Tech University, Ningbo, China
Chengkun Wei
Zhejiang University
Network System, Data Privacy, Machine Learning Security
Wenzhi Chen
Chang Gung University
industrial design, design education, learning, teaching