🤖 AI Summary
Existing chain-of-thought (CoT) methods suffer from “over-reasoning” on simple problems, leading to redundant inference steps; meanwhile, prevailing length-penalization strategies ignore inherent problem complexity variations. This paper proposes an adaptive reasoning trigger mechanism that activates CoT only when necessary. Our contributions are threefold: (1) the first contrastive reward paradigm jointly optimizing inference length and quality—enabling training on ambiguous tasks without ground-truth rationales; (2) an uncertainty-aware reasoning gate that dynamically controls inference depth based on problem complexity; and (3) theoretical guarantees for joint optimization of accuracy and conciseness. Evaluated across multiple benchmarks, our method maintains original accuracy while reducing average reasoning steps by 37%, yielding significantly more concise and interpretable explanations—achieving true “on-demand reasoning.”
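The core contrastive idea (prefer by quality, break near-ties by conciseness) can be sketched as follows. This is a minimal illustrative assumption, not the paper's actual reward formulation; the function name, margin value, and signature are all hypothetical:

```python
def contrastive_preference(quality_a, len_a, quality_b, len_b, quality_margin=0.05):
    """Compare two reasoning traces for the same problem.

    Returns +1 if trace A is preferred, -1 if trace B is preferred, 0 if tied.
    Quality dominates; when qualities fall within the margin, the shorter
    trace wins, rewarding conciseness without sacrificing correctness.
    """
    if abs(quality_a - quality_b) > quality_margin:
        return 1 if quality_a > quality_b else -1
    if len_a != len_b:
        return 1 if len_a < len_b else -1
    return 0
```

Because the reward comes from pairwise comparison rather than a ground-truth rationale, a scheme like this can in principle be applied to ambiguous tasks, as the summary claims.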
📝 Abstract
Chain of Thought (CoT) reasoning enhances language models' performance but often leads to inefficient "overthinking" on simple problems. We identify that existing approaches that directly penalize reasoning length fail to account for varying problem complexity. Our approach constructs rewards through length and quality comparisons, guided by theoretical assumptions that jointly enhance solution correctness and conciseness. Moreover, we extend our method to fuzzy tasks where ground truth is unavailable. Experiments across multiple reasoning benchmarks demonstrate that our method maintains accuracy while generating significantly more concise explanations, effectively teaching models to "think when needed."