🤖 AI Summary
Large reasoning models (LRMs) suffer from a "reasoning-answer hit gap" in evidence-dependent factual question answering: they often identify the correct facts during reasoning yet fail to carry them into the final answer, limiting factual fidelity. To address this, the paper proposes **MR-ALIGN**, a meta-reasoning informed alignment framework that models state transitions along the model's thinking process and constructs a transition-aware implicit reward from their probabilities. This reward re-weights atomic thinking segments, reshaping token-level signals into probability-aware segment scores, and requires no external verifiers or explicit supervision. Evaluated on four factual QA benchmarks and one long-form factuality benchmark, MR-ALIGN consistently improves answer accuracy and factual consistency while suppressing misleading reasoning steps, suggesting that aligning the reasoning process itself, rather than only the outputs, is key to the factual reliability of LRMs.
📝 Abstract
Large reasoning models (LRMs) show strong capabilities in complex reasoning, yet their marginal gains on evidence-dependent factual questions are limited. We find this limitation is partially attributable to a reasoning-answer hit gap, where the model identifies the correct facts during reasoning but fails to incorporate them into the final response, thereby reducing factual fidelity. To address this issue, we propose MR-ALIGN, a Meta-Reasoning informed alignment framework that enhances factuality without relying on external verifiers. MR-ALIGN quantifies state transition probabilities along the model's thinking process and constructs a transition-aware implicit reward that reinforces beneficial reasoning patterns while suppressing defective ones at the level of atomic thinking segments. This re-weighting reshapes token-level signals into probability-aware segment scores, encouraging coherent reasoning trajectories that are more conducive to factual correctness. Empirical evaluations across four factual QA datasets and one long-form factuality benchmark show that MR-ALIGN consistently improves accuracy and truthfulness while reducing misleading reasoning. These results highlight that aligning the reasoning process itself, rather than merely the outputs, is pivotal for advancing factuality in LRMs.
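The core idea, estimating transition probabilities over reasoning states and turning them into segment-level implicit rewards, can be sketched in a few lines. Everything here is an illustrative assumption: the state labels (`"recall"`, `"verify"`, `"answer"`), the function names, and the use of log-probability as the reward are stand-ins, not the paper's actual taxonomy or formulation.

```python
from collections import Counter, defaultdict
import math

def transition_probs(trajectories):
    """Estimate P(next_state | state) from reasoning trajectories.

    Each trajectory is a list of state labels, one per atomic thinking
    segment (labels are hypothetical, not the paper's taxonomy).
    """
    counts = defaultdict(Counter)
    for traj in trajectories:
        for src, dst in zip(traj, traj[1:]):
            counts[src][dst] += 1
    return {
        src: {dst: c / sum(nxt.values()) for dst, c in nxt.items()}
        for src, nxt in counts.items()
    }

def segment_rewards(traj, probs, floor=1e-6):
    """Score each transition in a trajectory by its log-probability.

    Frequent (presumably beneficial) transitions receive higher implicit
    reward; rare or unseen (presumably defective) ones are penalized.
    """
    return [
        math.log(probs.get(src, {}).get(dst, floor))
        for src, dst in zip(traj, traj[1:])
    ]

# Toy corpus of labeled trajectories.
trajs = [
    ["recall", "verify", "answer"],
    ["recall", "answer"],
    ["recall", "verify", "answer"],
]
probs = transition_probs(trajs)
rewards = segment_rewards(["recall", "verify", "answer"], probs)
```

In this sketch, the common `verify -> answer` transition scores `log(1.0) = 0` while the rarer `recall -> answer` shortcut scores lower, so a policy optimized against these segment scores would be nudged toward trajectories that pass through verification, which is the spirit of the transition-aware re-weighting described above.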