🤖 AI Summary
Test-time scaling methods for large language models (LLMs), such as self-consistency with majority voting, tend to hit diminishing returns in accuracy while incurring heavy computational overhead on complex reasoning tasks. To address this, the authors propose DeepConf (Deep Think with Confidence): a lightweight method that requires no additional training or hyperparameter tuning and uses confidence signals computed internally during autoregressive decoding to prune low-quality reasoning traces, either in real time during generation or after the fact. By filtering reasoning trajectories and aggregating answers from the high-confidence ones, DeepConf improves inference efficiency without compromising correctness and integrates seamlessly into existing serving frameworks. On challenging benchmarks such as AIME 2025, DeepConf reaches up to 99.9% accuracy while cutting generated tokens by up to 84.7% compared with full parallel chain-of-thought sampling, yielding a substantially better accuracy–efficiency trade-off for confidence-aware test-time scaling.
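To make the filter-and-aggregate idea concrete, here is a minimal Python sketch (not from the paper) of the offline variant: each sampled trace is scored with a simple confidence proxy, the geometric-mean probability of its generated tokens, only the most confident fraction is kept, and the final answer is chosen by a confidence-weighted majority vote. The confidence statistic, the `keep_fraction` value, and the function names are illustrative assumptions rather than DeepConf's exact definitions.

```python
import math
from collections import defaultdict

def trace_confidence(token_logprobs):
    """Proxy confidence for one reasoning trace: the geometric-mean probability
    of the tokens the model generated (higher = more confident). DeepConf
    derives richer signals (e.g. windowed statistics over the trace); this
    single scalar is only a stand-in."""
    if not token_logprobs:
        return 0.0
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def filter_and_vote(traces, keep_fraction=0.1):
    """Offline filter-then-vote: keep the most confident traces, then take a
    confidence-weighted majority vote over their final answers.

    `traces` is a list of (final_answer, token_logprobs) pairs from any
    parallel-sampling setup; `keep_fraction` is an illustrative ratio, not a
    value prescribed by the paper."""
    scored = sorted(((trace_confidence(lps), ans) for ans, lps in traces),
                    key=lambda t: t[0], reverse=True)   # most confident first
    kept = scored[: max(1, int(len(scored) * keep_fraction))]

    votes = defaultdict(float)
    for conf, ans in kept:
        votes[ans] += conf                               # weight each vote by confidence
    return max(votes, key=votes.get)

# Toy usage: two confident traces answer "42", one shaky trace answers "41".
print(filter_and_vote([("42", [-0.1, -0.2]), ("41", [-2.0, -3.0]),
                       ("42", [-0.3, -0.1])], keep_fraction=0.7))  # -> "42"
```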
📝 Abstract
Large Language Models (LLMs) have shown great potential in reasoning tasks through test-time scaling methods like self-consistency with majority voting. However, this approach often leads to diminishing returns in accuracy and high computational overhead. To address these challenges, we introduce Deep Think with Confidence (DeepConf), a simple yet powerful method that enhances both reasoning efficiency and performance at test time. DeepConf leverages model-internal confidence signals to dynamically filter out low-quality reasoning traces during or after generation. It requires no additional model training or hyperparameter tuning and can be seamlessly integrated into existing serving frameworks. We evaluate DeepConf across a variety of reasoning tasks and the latest open-source models, including Qwen 3 and GPT-OSS series. Notably, on challenging benchmarks such as AIME 2025, DeepConf@512 achieves up to 99.9% accuracy and reduces generated tokens by up to 84.7% compared to full parallel thinking.
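The abstract's phrase "during or after generation" also points to an online variant. Below is a hedged Python sketch of what real-time pruning could look like: a trace is abandoned as soon as a sliding-window average of per-token log-probability drops below a threshold. The streaming interface, window size, and threshold calibration are assumptions for illustration, not the paper's exact procedure.

```python
from collections import deque

def generate_with_early_stop(stream, threshold, window=256):
    """Online pruning sketch: consume a token stream and abandon the trace as
    soon as the mean log-probability over a sliding window of recent tokens
    falls below `threshold`.

    `stream` is assumed to yield (token, logprob) pairs from any serving stack
    that exposes per-token log-probabilities; `threshold` and `window` are
    hypothetical knobs (in practice a stopping threshold could be calibrated
    on a small set of warm-up traces)."""
    recent = deque(maxlen=window)
    tokens = []
    for token, logprob in stream:
        tokens.append(token)
        recent.append(logprob)
        window_conf = sum(recent) / len(recent)   # mean log-prob over the window
        if len(recent) == window and window_conf < threshold:
            return tokens, False                  # pruned: low-confidence trace
    return tokens, True                           # trace completed normally
```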