Entropy-Aware Branching for Improved Mathematical Reasoning

πŸ“… 2025-03-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Large language models (LLMs) frequently err in mathematical reasoning when their output distributions exhibit high entropy and high entropy variance, particularly at critical reasoning steps. To address this, the paper proposes an entropy-sensitive dynamic branching mechanism: for the first time, token-level entropy and entropy variance are modeled jointly to localize reasoning vulnerabilities, and at such points multiple high-probability token continuations are expanded in parallel. A larger LLM then serves as an external feedback module, scoring all candidate paths for consistency and accuracy across model scales so that the most reliable reasoning chain can be selected. Crucially, the method requires no fine-tuning of the base small LLM, only decoding-time modifications. Evaluated on mathematical word problems and calculation tasks, it boosts the accuracy of small-scale LLMs by up to 4.6%, substantially outperforming standard argmax decoding.
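The branch-point detection the summary describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the entropy threshold, variance threshold, and window size (`h_thresh`, `var_thresh`, `window`) are hypothetical placeholders, and the paper's exact joint criterion is not published here.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of one next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def find_branch_points(step_probs, window=4, h_thresh=1.0, var_thresh=0.1):
    """Flag decoding steps where both the entropy and the local variance
    of entropy are high, i.e. candidate "reasoning vulnerability" points.

    step_probs: list of next-token distributions, one per decoding step.
    Thresholds and window size are illustrative, not values from the paper.
    """
    entropies = [token_entropy(p) for p in step_probs]
    branch_points = []
    for t in range(len(entropies)):
        lo = max(0, t - window + 1)
        win = entropies[lo:t + 1]          # trailing window of entropies
        mean = sum(win) / len(win)
        var = sum((h - mean) ** 2 for h in win) / len(win)
        if entropies[t] >= h_thresh and var >= var_thresh:
            branch_points.append(t)
    return branch_points
```

A confident step such as `[0.97, 0.01, 0.01, 0.01]` has entropy near 0.17 nats and is never flagged, while a flat step such as `[0.4, 0.3, 0.2, 0.1]` (entropy about 1.28 nats) following confident steps trips both the entropy and variance criteria.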

πŸ“ Abstract
While Large Language Models (LLMs) are effectively aligned through extensive pre-training and fine-tuning, they still struggle with varying levels of uncertainty during token generation. In our investigation of mathematical reasoning, we observe that errors are more likely to arise at tokens exhibiting high entropy and high variance of entropy in the model's output distribution. Based on this observation, we propose a novel approach that dynamically branches the generation process on demand instead of defaulting to the single most probable token. By exploring in parallel multiple branches stemming from high-probability tokens at critical decision points, the model can discover diverse reasoning paths that might otherwise be missed. We further harness external feedback from larger models to rank and select the most coherent and accurate reasoning branch. Our experimental results on mathematical word problems and calculation questions show that this branching strategy boosts the reasoning accuracy of small LLMs by up to 4.6% compared to conventional argmax decoding.
Problem

Research questions and friction points this paper is trying to address.

Addresses high entropy uncertainty in LLM token generation
Improves mathematical reasoning via dynamic branching strategy
Enhances small LLM performance using external feedback ranking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic branching during token generation
Parallel exploration of high-probability reasoning paths
External feedback for branch ranking and selection
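The three innovations above can be combined into a single decoding loop along the following lines. This is a toy sketch under stated assumptions: `next_dist`, `is_branch_point`, `rollout`, and `judge_score` are hypothetical stand-ins for the model's next-token distribution, the entropy-based trigger, greedy continuation, and the larger LLM's feedback score; none of these names come from the paper.

```python
from typing import Callable, List, Tuple

def branch_and_rank(
    prefix: List[str],
    next_dist: Callable[[List[str]], List[Tuple[str, float]]],
    is_branch_point: Callable[[List[Tuple[str, float]]], bool],
    rollout: Callable[[List[str]], List[str]],
    judge_score: Callable[[List[str]], float],
    top_k: int = 3,
    max_steps: int = 50,
) -> List[str]:
    """Greedy decoding that, at uncertain steps, expands the top-k tokens
    into parallel continuations and lets an external judge pick the best."""
    seq = list(prefix)
    for _ in range(max_steps):
        dist = next_dist(seq)  # (token, prob) pairs, sorted by probability
        if not dist:
            break
        if is_branch_point(dist):
            # Expand top-k continuations in parallel; judge ranks them.
            candidates = [rollout(seq + [tok]) for tok, _ in dist[:top_k]]
            return max(candidates, key=judge_score)
        seq.append(dist[0][0])  # default: argmax step
    return seq
```

In a toy arithmetic scenario where the model is confident up to "2 + 2 =" and then splits probability between "5" (0.5) and "4" (0.45), plain argmax decoding emits the wrong "5", while branching plus a judge that rewards the correct answer recovers "4", which is the intuition behind the reported gain over argmax decoding.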