🤖 AI Summary
This work proposes a confidence-aware adaptive reasoning framework that addresses two inefficiencies in large language model reasoning: chain-of-thought prompting often generates unnecessarily lengthy reasoning paths, and multi-path self-consistency methods improve accuracy only at substantial computational cost. The proposed approach estimates uncertainty from sentence-level semantic and numerical features within a single reasoning trajectory, enabling a dynamic decision—without any fine-tuning—on whether to invoke costly multi-path reasoning. Trained solely on MedQA, the method generalizes well across diverse benchmarks including MathQA, MedMCQA, and MMLU. It achieves accuracy comparable to multi-path baselines while reducing token consumption by up to 80%, substantially improving inference efficiency.
📝 Abstract
Large language models (LLMs) achieve strong reasoning performance through chain-of-thought (CoT) reasoning, yet often generate unnecessarily long reasoning paths that incur high inference cost. Recent self-consistency-based approaches further improve accuracy but require sampling and aggregating multiple reasoning trajectories, leading to substantial additional computational overhead. This paper introduces a confidence-aware decision framework that analyzes a single completed reasoning trajectory to adaptively select between single-path and multi-path reasoning. The framework is trained using sentence-level numeric and linguistic features extracted from intermediate reasoning states in the MedQA dataset and generalizes effectively to MathQA, MedMCQA, and MMLU without additional fine-tuning. Experimental results show that the proposed method maintains accuracy comparable to multi-path baselines while using up to 80% fewer tokens. These findings demonstrate that reasoning trajectories contain rich signals for uncertainty estimation, enabling a simple, transferable mechanism to balance accuracy and efficiency in LLM reasoning.
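The gating idea described in the abstract—extract sentence-level features from one completed trajectory, then decide whether multi-path self-consistency is worth invoking—can be sketched roughly as follows. This is a minimal illustration under assumptions: the feature set (hedge-word ratio, numeric-sentence ratio), the hedge lexicon, and the threshold rule are all hypothetical stand-ins, not the paper's actual trained classifier.

```python
import re

# Assumed lexicon of uncertainty markers; the paper's linguistic features are richer.
HEDGE_WORDS = {"maybe", "possibly", "unsure", "perhaps", "might"}

def trajectory_features(trajectory: str) -> dict:
    """Toy sentence-level numeric/linguistic features from one completed CoT trace."""
    sentences = [s for s in re.split(r"[.!?]\s+", trajectory.strip()) if s]
    n = len(sentences) or 1  # guard against empty input
    hedges = sum(any(w in s.lower() for w in HEDGE_WORDS) for s in sentences)
    numeric = sum(bool(re.search(r"\d", s)) for s in sentences)
    return {
        "n_sentences": n,
        "hedge_ratio": hedges / n,      # fraction of sentences with hedging language
        "numeric_ratio": numeric / n,   # fraction of sentences containing numbers
    }

def needs_multi_path(trajectory: str, hedge_threshold: float = 0.25) -> bool:
    """Invoke costly multi-path self-consistency only when the trace looks uncertain."""
    return trajectory_features(trajectory)["hedge_ratio"] > hedge_threshold

confident = "The dose is 5 mg. This matches option B. Answer: B."
uncertain = "It might be renal. Possibly option C. Unsure between C and D. Answer: C."
print(needs_multi_path(confident), needs_multi_path(uncertain))  # → False True
```

In the paper the decision comes from a model trained on MedQA trajectories rather than a fixed threshold, but the control flow is the same: a cheap per-trajectory confidence estimate gates the expensive sampling step.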