🤖 AI Summary
This work investigates whether supervised learning with synthetic error injection can endow large language models (LLMs) with self-correction capability, serving as a low-cost alternative to costly reinforcement learning (RL). Method: Drawing inspiration from error-simulation paradigms in autonomous driving and robotics, we insert synthetic errors into reasoning chains, mask them, and train models to detect and rectify them, conducting cross-model ablation studies. Contribution/Results: We identify a fundamental distributional shift between synthetically injected errors and policy-induced (endogenous) errors that severely limits correction performance: models often regenerate the original error even after successfully detecting it. This is the first systematic study to expose the intrinsic limitations of supervised error injection for self-correction. Our findings help explain why online policy-based RL remains superior for LLM self-improvement and provide empirical evidence for selecting among self-improvement paradigms.
📝 Abstract
Reinforcement learning has become the dominant paradigm for eliciting reasoning and self-correction capabilities in large language models, but its computational expense motivates the search for alternatives. Inspired by error-simulation techniques from autonomous driving and robotics, we investigate whether supervised learning with synthetic error injection can induce self-correction abilities in language models. Our approach inserts artificial errors into reasoning chains, masks them, and supervises the model to recognize and correct the mistakes. Despite the intuitive appeal of this method, we find that it fails to significantly improve performance across multiple models, even on simple synthetic tasks. Moreover, even when a model catches its own error, it often reproduces the original mistake. We show that the distribution shift between synthetic errors and on-policy errors significantly degrades the error-correction ability of the fine-tuned model, even when the synthetic errors cover the on-policy error distribution well. Our results help explain why on-policy reinforcement learning has proven uniquely effective for eliciting self-correction.
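The injection recipe described in the abstract can be sketched as a data-construction step: corrupt one step of a correct reasoning chain, present the corrupted chain as input, and supervise only a detect-and-correct target (the corrupted prompt itself is excluded from the loss). The `corrupt_step` function, the arithmetic chain, and the target format below are illustrative assumptions, not the authors' actual implementation.

```python
import random


def corrupt_step(step: str) -> str:
    """Hypothetical corruption: flip one digit of an arithmetic step.

    (d + 1) % 10 guarantees the chosen digit actually changes.
    """
    digit_positions = [i for i, c in enumerate(step) if c.isdigit()]
    if not digit_positions:
        return step + " ???"  # fallback for non-numeric steps
    i = random.choice(digit_positions)
    wrong = str((int(step[i]) + 1) % 10)
    return step[:i] + wrong + step[i + 1:]


def build_example(chain: list[str], err_idx: int) -> dict:
    """Build one supervised example with a synthetic error at err_idx.

    The prompt is the chain truncated at the corrupted step; the target
    (the only part that would receive loss) names the bad step and
    restates the correct one.
    """
    corrupted = chain[:err_idx] + [corrupt_step(chain[err_idx])]
    target = f"Step {err_idx + 1} is wrong. Correction: {chain[err_idx]}"
    return {"prompt": "\n".join(corrupted), "target": target}


random.seed(0)  # deterministic corruption for the demo
chain = ["2 + 3 = 5", "5 * 4 = 20", "20 - 7 = 13"]
ex = build_example(chain, err_idx=1)
```

Because the corruption is sampled independently of the model, the resulting errors are off-policy by construction; the paper's finding is that this mismatch with the model's own (endogenous) errors is precisely what undermines the trained correction behavior.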