Decoding the Critique Mechanism in Large Reasoning Models

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how large reasoning models (LRMs) detect and correct errors in intermediate reasoning steps, focusing on the phenomenon where incorrect intermediate steps nonetheless yield correct final answers. By deliberately injecting arithmetic errors into chain-of-thought reasoning and combining feature-space analysis with latent representation manipulation, the work uncovers and names an intrinsic "hidden critique ability" within LRMs, and identifies an interpretable critique vector that encodes this capability. Remarkably, steering with this vector enhances the model's error detection and self-correction at test time without requiring additional training. Extensive experiments across multiple model scales and families demonstrate the effectiveness of the approach, yielding significant improvements in both reasoning accuracy and robustness.
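The error-injection probe described in the summary can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper name and the regex pattern for locating an arithmetic step are assumptions made here for clarity.

```python
import re

def inject_arithmetic_error(cot: str, delta: int = 1) -> str:
    """Perturb the result of the first 'a OP b = c'-style step in a
    chain-of-thought string, so one can test whether the model still
    recovers the correct final answer downstream (hypothetical helper)."""
    def bump(match: re.Match) -> str:
        # Keep the left-hand expression, shift the stated result by delta.
        return f"{match.group(1)} = {int(match.group(2)) + delta}"
    return re.sub(r"(\d+ [+\-*/] \d+) = (\d+)", bump, cot, count=1)

cot = "Step 1: 12 + 30 = 42. Step 2: 42 * 2 = 84."
print(inject_arithmetic_error(cot))
# → Step 1: 12 + 30 = 43. Step 2: 42 * 2 = 84.
```

Only the first matched step is corrupted (`count=1`), mirroring the idea of inserting a single localized mistake and observing whether it propagates or gets silently corrected.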

📝 Abstract
Large Reasoning Models (LRMs) exhibit backtracking and self-verification mechanisms that enable them to revise intermediate steps and reach correct solutions, yielding strong performance on complex logical benchmarks. We hypothesize that such behaviors are beneficial only when the model has sufficiently strong "critique" ability to detect its own mistakes. This work systematically investigates how current LRMs recover from errors by inserting arithmetic mistakes in their intermediate reasoning steps. Notably, we discover a peculiar yet important phenomenon: despite the error propagating through the chain-of-thought (CoT), resulting in an incorrect intermediate conclusion, the model still reaches the correct final answer. This recovery implies that the model must possess an internal mechanism to detect errors and trigger self-correction, which we refer to as the hidden critique ability. Building on feature space analysis, we identify a highly interpretable critique vector representing this behavior. Extensive experiments across multiple model scales and families demonstrate that steering latent representations with this vector improves the model's error detection capability and enhances the performance of test-time scaling at no extra training cost. Our findings provide a valuable understanding of LRMs' critique behavior, suggesting a promising direction to control and improve their self-verification mechanism. Our code is available at https://github.com/mail-research/lrm-critique-vectors.
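The test-time steering described in the abstract can be sketched under a common assumption for such interventions: the critique vector is a direction in activation space (here built as a difference of means, which may differ from the paper's extraction), and steering adds a scaled copy of it to a hidden state. All names and dimensions below are hypothetical; the authors' actual code is in the linked repository.

```python
import numpy as np

def critique_vector(h_error: np.ndarray, h_clean: np.ndarray) -> np.ndarray:
    """Unit direction separating hidden states collected while the model
    processes error-containing vs. clean reasoning steps (assumed
    difference-of-means construction)."""
    v = h_error.mean(axis=0) - h_clean.mean(axis=0)
    return v / np.linalg.norm(v)

def steer(hidden: np.ndarray, v: np.ndarray, alpha: float = 4.0) -> np.ndarray:
    """Shift a hidden state along the critique direction at test time;
    alpha is an illustrative steering strength, not a reported value."""
    return hidden + alpha * v

# Toy activations standing in for one layer's hidden states.
rng = np.random.default_rng(0)
h_clean = rng.normal(size=(32, 64))
h_error = h_clean + rng.normal(loc=0.3, size=(32, 64))

v = critique_vector(h_error, h_clean)
steered = steer(h_clean[0], v)
```

In a real model this addition would typically be applied inside the forward pass (e.g. via a hook on a chosen layer), which is what makes the intervention training-free.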
Problem

Research questions and friction points this paper is trying to address.

Large Reasoning Models
critique mechanism
error recovery
self-verification
chain-of-thought
Innovation

Methods, ideas, or system contributions that make the work stand out.

critique mechanism
large reasoning models
self-correction
critique vector
test-time scaling
Hoang Phan
College of Engineering and Computer Science, VinUniversity, Vietnam
Quang H. Nguyen
College of Engineering and Computer Science, VinUniversity, Vietnam
Hung T. Q. Le
College of Engineering and Computer Science, VinUniversity, Vietnam
Xiusi Chen
Postdoctoral Fellow, University of Illinois Urbana-Champaign
Language Models · Neuro-Symbolic AI · Reasoning and Planning · LLM Alignment
Heng Ji
Professor of Computer Science, AICE Director, ASKS Director, UIUC, Amazon Scholar
Natural Language Processing · Large Language Models
Khoa D. Doan
College of Engineering and Computer Science, VinUniversity, Vietnam