How Long Reasoning Chains Influence LLMs' Judgment of Answer Factuality

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how large language models (LLMs) acting as factuality evaluators are susceptible to the superficial fluency of reasoning chains, often failing to accurately assess answer correctness. Through controlled experiments on question-answering and mathematical reasoning benchmarks, the work systematically analyzes the impact of providing model-generated reasoning chains on LLM evaluation behavior. It reveals, for the first time, a divergence in reliance on reasoning chains across evaluator capability levels: weaker models are easily misled by fluent yet incorrect reasoning, while stronger models, though partially leveraging reasoning information, remain vulnerable to the deceptive allure of high-quality surface presentation. The findings underscore the critical need to distinguish genuine reasoning quality from mere fluency and demonstrate that reasoning chains carry both factual signals and potential for misguidance.
📝 Abstract
Large language models (LLMs) have been widely adopted as a scalable surrogate for human evaluation, yet such judges remain imperfect and susceptible to surface-level biases. One possible reason is that these judges lack sufficient information when assessing answer correctness. With the rise of reasoning-capable models, exposing a generator's reasoning content to the judge provides richer information and is a natural candidate for improving judgment accuracy. However, its actual impact on judge behavior remains understudied. In this paper, we systematically investigate how access to reasoning chains affects LLM-based judgment across factual question answering (QA) and mathematical reasoning benchmarks. We find that weak judges are easily swayed by the mere presence of reasoning, frequently accepting incorrect answers accompanied by fluent reasoning, while strong judges can partially leverage reasoning as informative evidence. Nevertheless, even strong judges are misled by seemingly high-quality reasoning chains. Controlled experiments further reveal that both the fluency and the factuality of reasoning chains are critical signals driving judge decisions. These findings highlight the need for more robust LLM judges that can distinguish genuine reasoning quality from superficial fluency when evaluating modern reasoning models.
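The experimental contrast described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's actual protocol): the same QA item is presented to a judge model twice, once with only the final answer and once with the generator's reasoning chain exposed, so the two judgments can be compared. All names and the example item are assumptions for illustration.

```python
# Hypothetical sketch of the judged-with-vs-without-reasoning setup.
# The prompt wording and the example item are illustrative assumptions,
# not taken from the paper.

def build_judge_prompt(question, answer, reasoning=None):
    """Build a factuality-judgment prompt; optionally expose the
    generator's reasoning chain to the judge."""
    parts = [
        "You are a factuality judge. Decide if the answer is correct.",
        f"Question: {question}",
    ]
    if reasoning is not None:
        # The condition under study: the judge also sees the chain.
        parts.append(f"Model reasoning: {reasoning}")
    parts.append(f"Answer: {answer}")
    parts.append("Reply with 'correct' or 'incorrect'.")
    return "\n".join(parts)


# A fluent-but-wrong example of the kind the paper says misleads judges.
item = {
    "question": "What is the capital of Australia?",
    "answer": "Sydney",  # incorrect (the capital is Canberra)
    "reasoning": (
        "Sydney is Australia's largest and most famous city, "
        "so it is the capital."  # fluent yet factually wrong chain
    ),
}

answer_only = build_judge_prompt(item["question"], item["answer"])
with_chain = build_judge_prompt(
    item["question"], item["answer"], item["reasoning"]
)
```

Comparing the judge's verdicts on `answer_only` versus `with_chain` across many items is one way to measure how much a fluent reasoning chain sways the judgment.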
Problem

Research questions and friction points this paper is trying to address.

factuality judgment
reasoning chains
LLM evaluation
surface bias
answer correctness
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning chains
LLM judges
factuality evaluation
fluency bias
answer correctness
Minzhu Tu
State Key Laboratory of AI Safety; Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Beijing University of Post and Telecommunications
Shiyu Ni
State Key Laboratory of AI Safety; Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Keping Bi
Institute of Computing Technology, Chinese Academy of Sciences
Information Retrieval