🤖 AI Summary
This work investigates the trade-off between parallel multi-chain reasoning and sequential stepwise refinement in test-time scaling of language models. Under matched token and compute budgets, we systematically compare the two paradigms and find that sequential reasoning significantly outperforms the mainstream parallel self-consistency approach. To further improve accuracy, we propose inverse-entropy weighted voting, a training-free mechanism that weights each chain's final answer in proportion to the inverse entropy of its reasoning chain, thereby improving the reliability of the final decision. Extensive experiments across five state-of-the-art open-source models on three challenging mathematical and logical reasoning benchmarks demonstrate that our method surpasses parallel baselines in 95.6% of configurations, with absolute accuracy gains of up to 46.7%. This study provides empirical evidence establishing sequential reasoning as the current state-of-the-art test-time scaling paradigm and delivers a lightweight, model-agnostic, plug-and-play inference optimization framework.
📝 Abstract
We revisit test-time scaling for language model reasoning and ask a fundamental question: at equal token budget and compute, is it better to run multiple independent chains in parallel, or to run fewer chains that iteratively refine through sequential steps? Through comprehensive evaluation across 5 state-of-the-art open-source models and 3 challenging reasoning benchmarks, we find that sequential scaling, where chains explicitly build upon previous attempts, consistently outperforms the dominant parallel self-consistency paradigm in 95.6% of configurations, with accuracy gains of up to 46.7%. Further, we introduce inverse-entropy weighted voting, a novel training-free method that further boosts the accuracy of sequential scaling. By weighting answers in proportion to the inverse entropy of their reasoning chains, we increase our success rate over parallel majority voting and establish sequential scaling as the optimal test-time scaling strategy. Our findings fundamentally challenge the parallel reasoning orthodoxy that has dominated test-time scaling since self-consistency decoding (Wang et al., 2022), positioning sequential refinement as the robust default for modern LLM reasoning and necessitating a paradigm shift in how we approach inference-time optimization.
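The voting rule described above can be sketched in a few lines. The snippet below is a minimal, hedged illustration (not the authors' released implementation): it assumes each chain exposes its per-token probability distributions, scores a chain by its mean Shannon entropy, and weights that chain's answer by the inverse of this entropy before picking the highest-scoring answer.

```python
import math
from collections import defaultdict

def mean_entropy(token_dists, eps=1e-12):
    """Average Shannon entropy (in nats) over a chain's per-token
    probability distributions. Each distribution is a dict token -> prob."""
    total = 0.0
    for dist in token_dists:
        total -= sum(p * math.log(p + eps) for p in dist.values())
    return total / len(token_dists)

def inverse_entropy_vote(chains, eps=1e-12):
    """chains: list of (final_answer, per_token_distributions) pairs.
    Each answer receives a vote weighted by 1 / (mean entropy + eps),
    so confident (low-entropy) chains count for more."""
    scores = defaultdict(float)
    for answer, token_dists in chains:
        scores[answer] += 1.0 / (mean_entropy(token_dists) + eps)
    return max(scores, key=scores.get)
```

For example, one highly confident chain answering "42" (near-deterministic token distributions) can outvote two uncertain chains answering "41" (near-uniform distributions), whereas plain majority voting would pick "41". The distribution dicts here are a stand-in for whatever logprob format the serving stack actually returns.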