🤖 AI Summary
Existing LLM inference-scaling methods face two limitations: sequential decoding relies on fixed token budgets, leading to premature termination or wasted computation, while parallel decoding lacks inter-branch coordination and typically requires fine-tuning. This paper proposes a semantic-entropy-guided, multi-round parallel decoding framework with adaptive termination. For the first time, it introduces semantic entropy as an unsupervised, fine-tuning-free intrinsic quality metric, building on the observation that semantic diversity among parallel responses correlates negatively with overall accuracy. By evaluating semantic entropy dynamically across inference rounds, the method enables coordinated path pruning and well-timed termination. Combining the depth of sequential reasoning with the breadth of parallel exploration, it improves both accuracy and computational efficiency on complex tasks, including mathematical reasoning and commonsense QA, while reducing redundant generation, stabilizing inference, and improving cross-task generalization.
📝 Abstract
Recent advances in large language models (LLMs) have accelerated progress toward artificial general intelligence, with inference-time scaling emerging as a key technique. Contemporary approaches scale inference through either sequential reasoning (iteratively extending chains of thought) or parallel reasoning (generating multiple solutions simultaneously). However, both paradigms face fundamental limitations: sequential scaling typically relies on arbitrary token budgets for termination, leading to inefficiency or premature cutoff, while parallel scaling often lacks coordination among parallel branches and requires intrusive fine-tuning to perform effectively. In light of these challenges, we aim to design a flexible test-time collaborative inference framework that exploits the complementary strengths of both the sequential and parallel reasoning paradigms. Toward this goal, the core challenge lies in developing an efficient and accurate intrinsic quality metric to assess model responses during collaborative inference, enabling dynamic control and early termination of the reasoning trace. To address this challenge, we introduce semantic entropy (SE), which quantifies the semantic diversity of parallel model responses and serves as a robust indicator of reasoning quality due to its strong negative correlation with accuracy...
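The semantic-entropy signal described above can be sketched in a few lines: group the parallel responses into semantic-equivalence clusters, then compute the Shannon entropy of the cluster-size distribution. Low entropy means the branches converge on one meaning; high entropy means they disagree, which the paper reports correlates with lower accuracy. This is a minimal illustration, not the paper's implementation: the `equivalent` predicate here is a toy normalized exact-match stand-in, whereas semantic-entropy methods typically use a bidirectional-entailment (NLI) check between answers.

```python
import math
from collections import Counter

def semantic_entropy(responses, equivalent=None):
    """Estimate semantic entropy over a batch of parallel responses.

    Responses are grouped into clusters of semantically equivalent
    answers; the Shannon entropy of the cluster-size distribution is
    returned. 0.0 means all responses agree; log(n) means all differ.
    """
    if equivalent is None:
        # Toy stand-in for semantic equivalence: normalized exact match.
        # A real system would use a bidirectional-entailment (NLI) model.
        equivalent = lambda a, b: a.strip().lower() == b.strip().lower()

    clusters = []        # one representative answer per cluster
    counts = Counter()   # cluster index -> number of members
    for r in responses:
        for i, rep in enumerate(clusters):
            if equivalent(r, rep):
                counts[i] += 1
                break
        else:  # no existing cluster matched: start a new one
            clusters.append(r)
            counts[len(clusters) - 1] = 1

    n = len(responses)
    return -sum((c / n) * math.log(c / n) for c in counts.values())
```

A termination rule in the spirit of the framework would then stop multi-round decoding once the entropy falls below a threshold (the branches have semantically converged) and prune branches outside the dominant cluster; the threshold is a tunable hyperparameter, not a value given in this summary.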