🤖 AI Summary
Large reasoning models (LRMs) often incur redundant computation and excessively long inference paths due to the absence of early termination mechanisms. To address this, we propose Just-Enough Thinking (JET), a reinforcement learning framework inspired by evidence accumulation models. JET explicitly models the "when-to-stop" decision via trajectory truncation and a length-aware reward that penalizes unnecessary reasoning steps while rewarding high-quality intermediate reasoning outcomes. Crucially, JET enables models to learn dynamic, input-adaptive stopping policies end-to-end during training, without requiring post-hoc heuristics or external supervision. On the Olympiad benchmark, JET reduces the average output length of DeepSeek-Distill-Qwen-1.5B by 46.3% while improving accuracy by 4.6%, demonstrating concurrent gains in inference efficiency and reasoning fidelity. This establishes JET as a learnable, generalizable paradigm for adaptive termination in deep reasoning systems.
📝 Abstract
Large Reasoning Models (LRMs) have achieved impressive performance on challenging tasks, yet their deep reasoning often incurs substantial computational costs. Existing reinforcement learning methods for efficient reasoning still struggle to construct short reasoning paths during the rollout stage, which limits effective learning. Inspired by Evidence Accumulation Models, we find that LRMs often accumulate sufficient information early in reasoning, making further reasoning steps redundant. Based on this insight, we propose Just-Enough Thinking (JET), which trains models to proactively terminate unnecessary reasoning. JET performs trajectory truncation during rollout to expose the model to short, distributionally consistent reasoning paths. In addition, it uses a quality-controlled length reward to encourage concise reasoning while maintaining correctness. Extensive experiments demonstrate that JET significantly improves reasoning efficiency without sacrificing accuracy. Notably, DeepSeek-Distill-Qwen-1.5B achieves a 4.6% accuracy gain while reducing output length by 46.3% on the Olympiad benchmark. Our code is available on GitHub.
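The quality-controlled length reward described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the function name, the `alpha` scaling factor, and the linear length bonus are all assumptions. The key idea it illustrates is the quality gate, where a trajectory only earns a length bonus if its final answer is correct, so brevity is never rewarded at the expense of accuracy.

```python
# Hypothetical sketch of a quality-controlled length reward (names and
# shaping are assumptions, not JET's exact formulation).
def jet_length_reward(correct: bool, length: int, max_length: int,
                      alpha: float = 0.5) -> float:
    """Reward for one rollout trajectory of `length` tokens."""
    if not correct:
        return 0.0  # quality gate: no length credit for a wrong answer
    # Fraction of the token budget left unused by this trajectory.
    saved = max(0.0, 1.0 - length / max_length)
    # Base correctness reward plus a scaled bonus for concision.
    return 1.0 + alpha * saved
```

For example, a correct 512-token rollout under a 2048-token budget would score `1.0 + 0.5 * 0.75 = 1.375`, while an incorrect rollout of any length scores `0.0`.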