🤖 AI Summary
Large language models (LLMs) exhibit an "answer–explanation" ordering dependency in reasoning: generating answers before explanations (answer-then-reason) increases hallucination risk, whereas explanation-before-answer (reason-then-answer) yields more reliable outputs, revealing a gap between the reasoning a model states and the answers it commits to.
Method: This work is the first to identify response ordering (answer→reason vs. reason→answer) as a critical hallucination trigger. We propose a lightweight consistency evaluation benchmark based on dual-path response contrast and introduce a reflection-based prompting paradigm that explicitly enforces reasoning-first generation without fine-tuning.
Contribution/Results: Extensive evaluation across multiple state-of-the-art LLMs demonstrates that our approach significantly reduces typical hallucinations (e.g., numerical comparison errors), improves average accuracy by 12.7%, and exhibits strong cross-model robustness.
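Below is a minimal sketch of the dual-path consistency check described in the summary. It assumes an OpenAI-compatible chat client; the model name, system prompts, and helper names are illustrative placeholders, not the authors' code.

```python
# Dual-path consistency check: query the same question in two generation
# orders (answer-then-reason vs. reason-then-answer) and compare results.
# Assumes an OpenAI-compatible endpoint; prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask(system: str, question: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content

def dual_path_responses(question: str) -> tuple[str, str]:
    # Path A: state the answer first, then justify it (answer-then-reason).
    answer_first = ask(
        "State your final answer on the first line, then explain your reasoning.",
        question,
    )
    # Path B: reason step by step first, then conclude (reason-then-answer).
    reason_first = ask(
        "Work through the problem step by step, then state your final answer "
        "on the last line.",
        question,
    )
    return answer_first, reason_first
```

A consistency benchmark in this spirit would then extract the final answer from each path and flag the question whenever the two paths disagree, which is exactly the situation where the model is likely to have committed to an answer before reasoning about it.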
📝 Abstract
Large language models (LLMs) have attracted significant attention since their inception and have found applications across various academic and industrial domains. However, these models often suffer from the "hallucination problem", where outputs, though grammatically and logically coherent, lack factual accuracy or are entirely fabricated. A particularly troubling issue discovered and widely discussed recently is the numerical comparison error, in which multiple LLMs incorrectly infer that "9.11 > 9.9". We found that the order in which an LLM generates its answer and its reasoning affects the consistency of its output: results differ significantly when the model states an answer first and then provides the reasoning versus when it works through the reasoning first and then draws a conclusion. Inspired by this, we propose a new benchmark for assessing LLM consistency: comparing the responses produced by these two generation orders. This benchmark effectively identifies instances where an LLM first fabricates an answer and then generates a justification for it. Furthermore, we introduce a simple, novel prompt strategy designed to mitigate this issue. Experimental results show that this strategy improves performance across various LLMs compared with direct questioning. This work not only exposes a critical flaw in LLMs but also offers a practical way to enhance their reliability.
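As a rough illustration of the reasoning-first mitigation described in the abstract, the snippet below shows one way such a prompt could be phrased. The wording is an assumption for illustration only, not the paper's actual prompt.

```python
# Hedged sketch of a reasoning-first prompt: the model is instructed to
# reason and self-check before it is allowed to state an answer.
REASON_FIRST_PROMPT = (
    "Question: {question}\n\n"
    "Before giving any answer, work through the problem step by step.\n"
    "Re-check each step for factual and numerical correctness.\n"
    "Only after the reasoning is complete, write 'Final answer:' followed "
    "by your conclusion."
)

print(REASON_FIRST_PROMPT.format(question="Which is larger, 9.11 or 9.9?"))
```

The key design point is that the final answer is constrained to appear after the reasoning, so the model cannot commit to a conclusion first and rationalize it afterward.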