🤖 AI Summary
Large language models (LLMs) frequently err in mathematical reasoning when their output distributions exhibit high entropy and high entropy variance, particularly at critical reasoning steps. To address this, we propose an entropy-sensitive dynamic branching generation mechanism: for the first time, we jointly model token-level entropy and entropy variance to precisely localize reasoning vulnerabilities; at such points, we expand multiple high-probability token continuations in parallel. Furthermore, we employ a larger LLM as an external feedback module to perform cross-scale consistency and accuracy scoring across all candidate paths, enabling robust selection of the optimal reasoning chain. Crucially, our method requires no fine-tuning of the base small LLM; only decoding-time modifications are needed. Evaluated on mathematical word problems and calculation tasks, it improves the accuracy of small-scale LLMs by up to 4.6%, substantially outperforming standard argmax decoding.
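The branching trigger described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the entropy and variance thresholds, the sliding-window length, and the branching factor `k` are all illustrative assumptions.

```python
import math
from statistics import pvariance

def entropy(probs):
    """Shannon entropy (in nats) of a token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def should_branch(step_probs, entropy_history,
                  h_thresh=1.0, v_thresh=0.05, window=5):
    """Decide whether to branch at the current decoding step.

    Branches only when BOTH the current token entropy and the variance of
    recent entropies exceed their thresholds (threshold values are
    illustrative assumptions, not taken from the paper).
    Returns (branch_flag, current_entropy).
    """
    h = entropy(step_probs)
    recent = (entropy_history + [h])[-window:]
    flag = h > h_thresh and len(recent) > 1 and pvariance(recent) > v_thresh
    return flag, h

def top_k_branches(step_probs, k=3):
    """Indices of the k highest-probability tokens to expand in parallel."""
    ranked = sorted(range(len(step_probs)),
                    key=lambda i: step_probs[i], reverse=True)
    return ranked[:k]
```

At a flat (uncertain) step with a volatile entropy history, `should_branch` fires and the top-k continuations are expanded; at a confident step it falls back to ordinary greedy decoding.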
📝 Abstract
While Large Language Models (LLMs) are effectively aligned through extensive pre-training and fine-tuning, they still struggle with varying levels of uncertainty during token generation. In our investigation of mathematical reasoning, we observe that errors are more likely to arise at tokens exhibiting high entropy and high variance of entropy in the model's output distribution. Based on this observation, we propose a novel approach that dynamically branches the generation process on demand instead of defaulting to the single most probable token. By exploring in parallel multiple branches stemming from high-probability tokens at critical decision points, the model can discover diverse reasoning paths that might otherwise be missed. We further harness external feedback from larger models to rank and select the most coherent and accurate reasoning branch. Our experimental results on mathematical word problems and calculation questions show that this branching strategy boosts the reasoning accuracy of small LLMs by up to 4.6% compared to conventional argmax decoding.
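The final selection step, ranking candidate branches with external feedback, can be sketched as below. Here `judge_score` is a hypothetical stand-in for the larger-model scorer (e.g. a bigger LLM prompted to rate each chain's consistency and accuracy); it is not an API defined by the paper.

```python
def select_best_branch(branches, judge_score):
    """Return the candidate reasoning chain with the highest judge score.

    `branches` is a list of decoded reasoning chains; `judge_score` is any
    callable mapping a candidate string to a numeric score (assumed to wrap
    the external larger-LLM feedback module).
    """
    return max(branches, key=judge_score)
```

In practice the scorer would query the larger model once per candidate, so the branching factor directly bounds the extra inference cost.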