🤖 AI Summary
This work addresses the prevalence of “hallucinated” reasoning in large reasoning models—plausible-sounding but unfaithful explanations that misrepresent the model’s actual decision process, thereby undermining trustworthiness. The authors propose the first formal framework that decouples reasoning faithfulness from accuracy, introducing two verifiable criteria: stance consistency and causal influence, to evaluate faithfulness. They further construct the RFEval benchmark, which employs output-level counterfactual interventions to probe the structural integrity of reasoning traces. Evaluations across 7,186 samples from 12 open-source models reveal that 49.7% of outputs suffer from unfaithful reasoning, primarily due to stance inconsistency. Notably, faithfulness shows no significant correlation with accuracy, suggesting that current reinforcement learning fine-tuning practices may inadvertently compromise reasoning fidelity.
📝 Abstract
Large Reasoning Models (LRMs) exhibit strong performance, yet often produce rationales that sound plausible but fail to reflect their true decision process, undermining reliability and trust. We introduce a formal framework for reasoning faithfulness, defined by two testable conditions: stance consistency (a coherent stance linking reasoning to answer) and causal influence (the stated reasoning causally drives the answer under output-level interventions), explicitly decoupled from accuracy. To operationalize this, we present RFEval, a benchmark of 7,186 instances across seven tasks that probes faithfulness via controlled, output-level counterfactual interventions. Evaluating twelve open-source LRMs, we find unfaithfulness in 49.7% of outputs, predominantly from stance inconsistency. Failures are concentrated in brittle, convergent domains such as math and code, and correlate more with post-training regimes than with scale: within-family ablations indicate that adding current RL-style objectives on top of supervised fine-tuning can reduce reasoning faithfulness, even when accuracy is maintained. Crucially, accuracy is neither a sufficient nor a reliable proxy for faithfulness: once controlling for model and task, the accuracy-faithfulness link is weak and statistically insignificant. Our work establishes a rigorous methodology for auditing LRM reliability and shows that trustworthy AI requires optimizing not only for correct outcomes but also for the structural integrity of the reasoning process. Our code and dataset can be found at project page: $\href{https://aidaslab.github.io/RFEval/}{https://aidaslab.github.io/RFEval/}$