🤖 AI Summary
Existing LLM decoding strategies, such as temperature scaling and nucleus sampling, degrade on NLP tasks under high predictive uncertainty. To address this, we propose a training-free cautious decoding method that assesses uncertainty via prediction entropy and adaptively triggers parallel multi-path sampling. Each candidate path is terminated at punctuation boundaries and scored by perplexity, and the lowest-perplexity continuation is selected. The approach integrates entropy-driven dynamic exploration with a human-inspired, multi-path deliberative decision-making mechanism, establishing a cognitively grounded decoding paradigm. Experiments demonstrate consistent and significant improvements over mainstream decoding strategies across diverse LLMs and multimodal models. Moreover, our method is orthogonal to techniques such as self-consistency, enabling complementary integration to further enhance robustness and accuracy.
📝 Abstract
The next-token prediction paradigm has become prevalent for autoregressive models in the era of LLMs. The current default sampling choice for popular LLMs is temperature scaling together with nucleus sampling, which balances diversity and coherence. Nevertheless, such an approach leads to inferior performance in various NLP tasks when the model is uncertain about test questions. To this end, we propose a new training-free decoding strategy, dubbed Cautious Next Token Prediction (CNTP). During decoding, if the model has comparatively high prediction entropy at a certain step, we independently sample multiple trials starting from that step and stop each trial upon encountering punctuation. We then select the trial with the lowest perplexity score, viewed as the most probable and reliable path given the model's capacity. The trial number is negatively correlated with the prediction confidence, i.e., the less confident the model is, the more trials it should sample. This mirrors human behaviour: when feeling uncertain or unconfident, one tends to think more creatively, exploring multiple paths of thought, and cautiously selects the path one feels most confident about. Extensive experiments on both LLMs and MLLMs show that our proposed CNTP approach consistently outperforms existing standard decoding strategies by a clear margin. Moreover, integrating CNTP with self-consistency can further improve over vanilla self-consistency. We believe CNTP has the potential to become one of the default choices for LLM decoding. Code is available at https://github.com/wyzjack/CNTP.
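The core loop described in the abstract can be sketched roughly as follows. This is a minimal toy sketch, not the paper's exact formulation: the trial-count schedule (`num_trials`), its `k_min`/`k_max` bounds, and the hardcoded example trials are illustrative assumptions; a real implementation would plug in an actual model's next-token distributions and token log-probabilities.

```python
import math

def entropy(probs):
    # Shannon entropy (natural log) of a next-token distribution
    return -sum(p * math.log(p) for p in probs if p > 0)

def num_trials(probs, k_min=1, k_max=8):
    # Hypothetical schedule realizing the negative correlation between
    # confidence and exploration: higher entropy -> more sampled trials.
    h_max = math.log(len(probs))  # entropy of the uniform distribution
    return max(k_min, round(k_min + (k_max - k_min) * entropy(probs) / h_max))

def perplexity(logprobs):
    # exp of the mean negative log-probability over a sampled path
    return math.exp(-sum(logprobs) / len(logprobs))

def select_trial(trials):
    # trials: list of (tokens, logprobs) pairs, each trial having been
    # stopped at the first punctuation token; keep the lowest-perplexity path
    return min(trials, key=lambda t: perplexity(t[1]))

# Toy example: two candidate continuations with per-token log-probabilities.
trials = [
    (["sure", "."], [math.log(0.9), math.log(0.8)]),
    (["hmm", "."], [math.log(0.2), math.log(0.3)]),
]
best_tokens, _ = select_trial(trials)
```

Here a uniform distribution (maximum entropy) yields the full trial budget `k_max`, while a one-hot distribution yields `k_min`, so extra sampling cost is only paid at uncertain steps. The threshold for deciding which steps count as "comparatively high entropy" is a tunable hyperparameter not shown above.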