🤖 AI Summary
This study addresses two key challenges in monitoring chain-of-thought (CoT) reasoning in large reasoning models (LRMs): (1) CoT traces often fail to faithfully reflect internal decision processes, and (2) monitoring systems are prone to erroneous judgments due to verbose or misleading reasoning. We propose MoME—a novel paradigm that employs large language models as structured monitors, generating verifiable judgments and supporting evidence from CoT traces to jointly model reasoning faithfulness and monitor reliability. Through CoT analysis, controlled intervention comparisons, and correlation modeling, we empirically evaluate diverse reasoning-guidance strategies across mathematical, scientific, and ethical tasks. Results demonstrate that CoT faithfulness critically determines monitoring effectiveness; specific interventions—particularly explicit reflective prompting—simultaneously improve both task performance and monitorability; and MoME exhibits cross-domain feasibility and robustness.
📝 Abstract
Large Reasoning Models (LRMs) have demonstrated remarkable performance on complex tasks by engaging in extended reasoning before producing final answers. Beyond improving abilities, these detailed reasoning traces also create a new opportunity for AI safety, CoT Monitorability: monitoring potential model misbehavior, such as the use of shortcuts or sycophancy, through their chain-of-thought (CoT) during decision-making. However, two key fundamental challenges arise when attempting to build more effective monitors through CoT analysis. First, as prior research on CoT faithfulness has pointed out, models do not always truthfully represent their internal decision-making in the generated reasoning. Second, monitors themselves may be either overly sensitive or insufficiently sensitive, and can potentially be deceived by models'long, elaborate reasoning traces. In this paper, we present the first systematic investigation of the challenges and potential of CoT monitorability. Motivated by two fundamental challenges we mentioned before, we structure our study around two central perspectives: (i) verbalization: to what extent do LRMs faithfully verbalize the true factors guiding their decisions in the CoT, and (ii) monitor reliability: to what extent can misbehavior be reliably detected by a CoT-based monitor? Specifically, we provide empirical evidence and correlation analyses between verbalization quality, monitor reliability, and LLM performance across mathematical, scientific, and ethical domains. Then we further investigate how different CoT intervention methods, designed to improve reasoning efficiency or performance, will affect monitoring effectiveness. Finally, we propose MoME, a new paradigm in which LLMs monitor other models'misbehavior through their CoT and provide structured judgments along with supporting evidence.