Are Reasoning Models More Prone to Hallucination?

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the hallucination propensity of Large Reasoning Models (LRMs) in fact-seeking tasks and its underlying causes. We conduct a holistic evaluation, behavioral trajectory analysis, and comparative multi-stage post-training experiments covering cold-start supervised fine-tuning (SFT), verifiable-reward reinforcement learning (RL), and knowledge distillation. We characterize two hallucination-related behaviors, Flaw Repetition and Think-Answer Mismatch, and introduce an analytical framework based on uncertainty–factuality misalignment. Results show that cold-start SFT combined with verifiable-reward RL significantly suppresses hallucination, whereas distillation alone or RL without cold start induces more nuanced hallucinations. Moreover, model uncertainty is systematically miscalibrated and strongly correlated with factual errors. Our work clarifies how post-training strategies shape hallucination, providing an initial understanding and practical guidance toward more factual reasoning models.

📝 Abstract
Recently evolved large reasoning models (LRMs) show powerful performance in solving complex tasks with long chain-of-thought (CoT) reasoning capability. As these LRMs are mostly developed by post-training on formal reasoning tasks, whether their reasoning capability generalizes to help reduce hallucination in fact-seeking tasks remains unclear and debated. For instance, DeepSeek-R1 reports increased performance on SimpleQA, a fact-seeking benchmark, while OpenAI-o3 observes even more severe hallucination. This discrepancy naturally raises the following research question: Are reasoning models more prone to hallucination? This paper addresses the question from three perspectives. (1) We first conduct a holistic evaluation of hallucination in LRMs. Our analysis reveals that LRMs that undergo a full post-training pipeline, with cold-start supervised fine-tuning (SFT) and verifiable-reward RL, generally exhibit alleviated hallucination. In contrast, both distillation alone and RL training without cold-start fine-tuning introduce more nuanced hallucinations. (2) To explore why different post-training pipelines affect hallucination in LRMs differently, we conduct behavior analysis. We characterize two critical cognitive behaviors that directly affect the factuality of an LRM: Flaw Repetition, where surface-level reasoning attempts repeatedly follow the same underlying flawed logic, and Think-Answer Mismatch, where the final answer fails to faithfully match the preceding CoT process. (3) Further, we investigate the mechanism behind the hallucination of LRMs from the perspective of model uncertainty. We find that increased hallucination of LRMs is usually associated with misalignment between model uncertainty and factual accuracy. Our work provides an initial understanding of hallucination in LRMs.
Problem

Research questions and friction points this paper is trying to address.

Investigates whether reasoning models are more prone to hallucination in fact-seeking tasks
Analyzes how different post-training pipelines affect hallucination in large reasoning models
Explores the cognitive behaviors and uncertainty misalignment behind hallucination in reasoning models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Full post-training pipeline (cold-start SFT plus verifiable-reward RL) reduces LRM hallucination
Behavior analysis identifies Flaw Repetition and Think-Answer Mismatch as flawed cognitive patterns
Misalignment between model uncertainty and factual accuracy accompanies increased hallucination
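The uncertainty–factuality misalignment the paper describes can be illustrated with a standard calibration check. A minimal sketch (not the paper's code): expected calibration error (ECE) is a common proxy for this misalignment, and the confidence values below are made up for illustration.

```python
# Illustrative sketch: quantify the gap between a model's stated confidence
# and its factual accuracy. Confidences and correctness labels are invented.
def expected_calibration_error(confidences, correct, n_bins=5):
    """Average |accuracy - confidence| over equal-width confidence bins,
    weighted by bin size. A large value signals uncertainty-factuality
    misalignment: the model is confident when it is wrong (or vice versa)."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into last bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / total) * abs(acc - avg_conf)
    return ece

# Hypothetical fact-seeking answers: high confidence but often wrong,
# i.e. an overconfident (miscalibrated) model.
confs = [0.95, 0.9, 0.92, 0.88, 0.6, 0.3]
labels = [0, 1, 0, 0, 1, 0]  # 1 = factually correct
print(round(expected_calibration_error(confs, labels), 3))  # → 0.558
```

A well-calibrated model would score near zero here; the large value reflects answers asserted with ~90% confidence that are mostly wrong.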
Zijun Yao
Department of Computer Science and Technology, Tsinghua University
Yantao Liu
Qwen, Alibaba
Reinforcement Learning · Reward Modeling · Large Language Models
Yanxu Chen
Department of Computer Science and Technology, Tsinghua University
Jianhui Chen
Department of Computer Science and Technology, Tsinghua University
Junfeng Fang
National University of Singapore
Model Editing · AI Safety · LLM Explainability · AI4Science
Lei Hou
RMIT University
Building Information Modeling (BIM) · Project Management · Construction IT · Productivity Research · Lean Construction
Juanzi Li
Tsinghua University
Semantic Web · Data Mining · NLP
Tat-Seng Chua
School of Computing, National University of Singapore