MR-Align: Meta-Reasoning Informed Factuality Alignment for Large Reasoning Models

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large reasoning models (LRMs) suffer from a reasoning-answer hit gap in evidence-dependent factual question answering: they often identify the correct facts during reasoning but fail to carry them into the final answer, limiting factual fidelity. To address this, the paper proposes **MR-ALIGN**, a meta-reasoning informed alignment framework that models the state transitions of the thinking process and constructs an implicit reward signal from the transition probabilities. A probability-aware reweighting mechanism scores atomic thinking segments and is optimized via implicit reinforcement learning over reasoning trajectories, requiring no external annotations or explicit supervision. Evaluated on four factual QA benchmarks and one long-form factuality benchmark, MR-ALIGN significantly improves answer accuracy and factual consistency while suppressing misleading reasoning steps, pointing to reasoning-process alignment as a path to more factually reliable LRMs.
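To make the transition-modeling idea concrete, here is a minimal Python sketch of estimating reasoning-state transition probabilities from thinking traces that have been segmented and labeled with coarse reasoning states. The state vocabulary and the example traces are hypothetical stand-ins, not the paper's actual taxonomy or data.

```python
# Minimal sketch: estimate P(next_state | state) from state-labeled traces.
# The labels ("recall", "verify", "digress", "commit") are hypothetical.
from collections import Counter, defaultdict

def estimate_transitions(traces):
    """traces: list of state-label sequences, one per thinking trace.
    Returns a nested dict: state -> {next_state: probability}."""
    counts = defaultdict(Counter)
    for trace in traces:
        for prev, nxt in zip(trace, trace[1:]):
            counts[prev][nxt] += 1
    return {
        state: {nxt: c / sum(nexts.values()) for nxt, c in nexts.items()}
        for state, nexts in counts.items()
    }

traces = [
    ["recall", "verify", "commit"],
    ["recall", "digress", "recall", "commit"],
]
probs = estimate_transitions(traces)
# probs["recall"] == {"verify": 1/3, "digress": 1/3, "commit": 1/3}
```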

📝 Abstract
Large reasoning models (LRMs) show strong capabilities in complex reasoning, yet their marginal gains on evidence-dependent factual questions are limited. We find this limitation is partially attributable to a reasoning-answer hit gap, where the model identifies the correct facts during reasoning but fails to incorporate them into the final response, thereby reducing factual fidelity. To address this issue, we propose MR-ALIGN, a Meta-Reasoning informed alignment framework that enhances factuality without relying on external verifiers. MR-ALIGN quantifies state transition probabilities along the model's thinking process and constructs a transition-aware implicit reward that reinforces beneficial reasoning patterns while suppressing defective ones at the level of atomic thinking segments. This reweighting reshapes token-level signals into probability-aware segment scores, encouraging coherent reasoning trajectories that are more conducive to factual correctness. Empirical evaluations across four factual QA datasets and one long-form factuality benchmark show that MR-ALIGN consistently improves accuracy and truthfulness while reducing misleading reasoning. These results highlight that aligning the reasoning process itself, rather than merely the outputs, is pivotal for advancing factuality in LRMs.
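One plausible reading of the abstract's transition-aware implicit reward is a log-odds score computed from transition statistics estimated separately on traces that ended in correct versus incorrect answers (e.g. with a routine like estimate_transitions above). The sketch below follows that assumption; it illustrates the general idea and is not the paper's exact reward construction.

```python
import math

def implicit_transition_reward(trace, probs_correct, probs_incorrect, eps=1e-6):
    """Score a state-labeled trace by the log-odds that each of its
    transitions comes from traces ending in correct answers rather than
    incorrect ones. Positive totals indicate transition patterns that
    historically co-occur with factual hits; negative totals flag
    patterns to suppress."""
    reward = 0.0
    for prev, nxt in zip(trace, trace[1:]):
        p_good = probs_correct.get(prev, {}).get(nxt, eps)
        p_bad = probs_incorrect.get(prev, {}).get(nxt, eps)
        reward += math.log(p_good / p_bad)
    return reward
```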
Problem

Research questions and friction points this paper is trying to address.

LRMs locate the correct facts during reasoning but fail to carry them into final answers (the reasoning-answer hit gap)
Existing factuality methods typically depend on external verifiers or annotations
Aligning only the outputs, rather than the reasoning process itself, limits factual correctness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Framework enhances factuality without external verifiers
Quantifies state transition probabilities in reasoning process
Reweights token-level signals into probability-aware segment scores (see the sketch below)
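The reweighting bullet above can be sketched as follows, assuming the thinking trace is already split into atomic segments given as half-open token index ranges, with one weight per segment (for instance derived from the transition reward sketched earlier). The function name and the mean-pooling choice are illustrative assumptions.

```python
def segment_scores(token_logps, segments, seg_weights):
    """Collapse token-level log-probabilities into one probability-aware
    score per atomic segment: the segment's mean token log-prob scaled by
    its weight. `segments` are half-open (start, end) token index ranges."""
    scores = []
    for (start, end), weight in zip(segments, seg_weights):
        mean_lp = sum(token_logps[start:end]) / max(1, end - start)
        scores.append(weight * mean_lp)
    return scores

# Example: an 8-token trace split into two segments; the first segment's
# tokens are upweighted (1.5) and the second's downweighted (0.5) in the
# training signal.
token_logps = [-0.2, -0.1, -0.3, -0.5, -1.2, -0.9, -0.4, -0.3]
print(segment_scores(token_logps, [(0, 4), (4, 8)], [1.5, 0.5]))
```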
Xinming Wang
Institute of Automation, Chinese Academy of Sciences
Jian Xu
Institute of Automation, Chinese Academy of Sciences
Bin Yu
Zhongguancun Academy
Sheng Lian
Institute of Automation, Chinese Academy of Sciences
Hongzhu Yi
School of Computer Science and Technology, UCAS
Yi Chen
Institute of Automation, Chinese Academy of Sciences
Yingjian Zhu
Institute of Automation, Chinese Academy of Sciences
Boran Wang
Zhongguancun Academy
Hongming Yang
Tencent
Han Hu
Tencent
Xu-Yao Zhang
Institute of Automation, Chinese Academy of Sciences
Pattern Recognition, Machine Learning, OCR
Cheng-Lin Liu
Institute of Automation, Chinese Academy of Sciences