🤖 AI Summary
Large reasoning models (LRMs) frequently fail to abstain when confronted with intrinsically unsolvable problems (e.g., those with insufficient conditions), exposing a systemic inconsistency between internal cognitive states and external responses. This paper presents the first systematic characterization of this misalignment and proposes a lightweight, two-stage method: (1) cognitive monitoring, which estimates the model's internal uncertainty via auxiliary calibration; and (2) dynamic output intervention, which steers the model to proactively abstain on unsolvable queries at inference time. The approach requires only minimal labeled data and low-cost fine-tuning, with no architectural modifications. Experiments show a substantial 32.7% increase in abstention rate on unsolvable problems while preserving near-original performance on standard complex reasoning benchmarks (accuracy drop below 0.5%). This work points toward trustworthy, interpretable reasoning systems grounded in calibrated self-awareness and principled abstention.
📝 Abstract
Large reasoning models (LRMs) have shown remarkable progress on complex reasoning tasks. However, some questions posed to LRMs are inherently unanswerable, such as math problems lacking sufficient conditions. We find that LRMs consistently fail to provide appropriate abstentions when confronted with these unanswerable questions. In this paper, we systematically analyze and address this issue in the interest of trustworthy AI. We first conduct a detailed analysis of the distinct response behaviors of LRMs when facing unanswerable questions. We then show that LRMs possess sufficient cognitive capability to recognize the flaws in these questions, yet fail to exhibit appropriate abstention behavior, revealing a misalignment between their internal cognition and their external responses. Finally, to resolve this misalignment, we propose a lightweight, two-stage method that combines cognitive monitoring with inference-time intervention. Experimental results demonstrate that our method substantially improves the abstention rate while maintaining overall reasoning performance.
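To make the two-stage idea concrete, here is a minimal sketch, not the paper's implementation. It assumes, hypothetically, that the LRM's hidden state for a question is available as a fixed-size vector and that a small labeled set marks questions as answerable or unanswerable. Stage 1 trains a lightweight logistic-regression probe as a stand-in for cognitive monitoring; Stage 2 abstains at inference time when the probe flags the query. The hidden states below are synthetic, and `monitored_generate`, the threshold, and the abstention message are all illustrative.

```python
# A minimal sketch of the two-stage idea; NOT the paper's implementation.
# Assumptions (hypothetical): hidden states are fixed-size vectors, and a
# small labeled set marks questions as answerable (0) or unanswerable (1).
# Synthetic data stands in for real model activations.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Stage 1: cognitive monitoring -------------------------------------
# Train a lightweight probe on (hidden_state, label) pairs. The synthetic
# "hidden states" of unanswerable questions get a small offset along one
# direction, mimicking an internal "flaw detected" signal.
dim, n = 64, 512
labels = rng.integers(0, 2, size=n)            # 1 = unanswerable
states = rng.normal(size=(n, dim))
states[labels == 1, 0] += 1.5                  # weakly separable signal

probe = LogisticRegression(max_iter=1000).fit(states, labels)

# --- Stage 2: inference-time intervention ------------------------------
ABSTAIN = "I can't answer this: the problem appears to be missing information."

def generate(question: str) -> str:
    """Hypothetical stand-in for the LRM's normal decoding."""
    return f"<reasoning and final answer for: {question}>"

def monitored_generate(question: str, hidden_state: np.ndarray,
                       threshold: float = 0.5) -> str:
    """Abstain when the probe says the model internally doubts solvability."""
    p_unanswerable = probe.predict_proba(hidden_state[None, :])[0, 1]
    if p_unanswerable >= threshold:
        return ABSTAIN                         # intervene: abstain
    return generate(question)                  # otherwise, answer normally

# Demo on one synthetic "unanswerable" and one "answerable" hidden state.
bad = rng.normal(size=dim); bad[0] += 1.5
good = rng.normal(size=dim)
print(monitored_generate("x + y = 10; find x.", bad))
print(monitored_generate("2 + 2 = ?", good))
```

The probe threshold exposes the trade-off the paper reports: raising it abstains less often, protecting accuracy on solvable problems, while lowering it abstains more aggressively on flawed ones.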