🤖 AI Summary
This work addresses a limitation of current LLM agent training, which relies heavily on imitation learning and lacks any mechanism for autonomously evaluating and reflecting on action quality. The authors propose a reinforcement learning paradigm built on an action contrastive judgment task—the model identifies the superior action among multiple candidates—and uses the accuracy of these judgments as a reward signal to intrinsically cultivate self-reflection and reasoning capabilities. Notably, this approach requires no additional reflection annotations or specialized reasoning data, thereby overcoming key constraints of conventional imitation learning and reflection-based knowledge distillation. Experimental results show consistent improvements, with average gains of 5.07, 4.62, and 2.42 points over imitation learning, standard reinforcement learning, and knowledge distillation baselines, respectively, across three agent benchmarks. The method also exhibits stronger generalization on out-of-distribution and general reasoning tasks.
📝 Abstract
Training large language models (LLMs) as autonomous agents often begins with imitation learning, but this only teaches agents what to do, not why: agents never contrast successful actions against suboptimal alternatives and thus lack awareness of action quality. Recent approaches attempt to address this by introducing self-reflection supervision derived from contrasts between expert and alternative actions. However, the training paradigm fundamentally remains imitation learning: the model imitates pre-constructed reflection text rather than learning to reason autonomously. We propose Agentic Critical Training (ACT), a reinforcement learning paradigm that trains agents to identify the better action among alternatives. By rewarding the model when its judgment is correct, ACT drives the model to autonomously develop reasoning about action quality, producing genuine self-reflection rather than imitation of it. Across three challenging agent benchmarks, ACT consistently improves agent performance when combined with different post-training methods, achieving an average improvement of 5.07 points over imitation learning and 4.62 points over reinforcement learning. Compared to approaches that inject reflection capability through knowledge distillation, ACT also demonstrates clear advantages, yielding an average improvement of 2.42 points. Moreover, ACT enables strong out-of-distribution generalization on agentic benchmarks and improves performance on general reasoning benchmarks without any reasoning-specific training data. These results suggest that ACT is a promising path toward developing more reflective and capable LLM agents.
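The core training signal described above—reward the agent when it correctly picks the better action among candidates—can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names (`JudgmentTask`, `judgment_reward`, the toy judge) are hypothetical, and a real ACT setup would use an LLM policy and an RL optimizer rather than a fixed function.

```python
# Hypothetical sketch of an ACT-style action contrastive judgment reward.
# The model sees a state plus shuffled candidate actions, one of which is
# known to be better (e.g. the expert action); the reward is simply whether
# the model's judgment identifies that action.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class JudgmentTask:
    state: str                # task context shown to the model
    candidates: List[str]     # candidate actions, order shuffled
    better_index: int         # index of the known-better action

def judgment_reward(task: JudgmentTask,
                    judge: Callable[[str, List[str]], int]) -> float:
    """Binary reward: 1.0 iff the judge picks the better action."""
    choice = judge(task.state, task.candidates)
    return 1.0 if choice == task.better_index else 0.0

# Toy judge standing in for an LLM policy: always picks the first candidate.
def first_candidate_judge(state: str, candidates: List[str]) -> int:
    return 0

task = JudgmentTask(
    state="web navigation step: reach the checkout page",
    candidates=["click('checkout')", "scroll down at random"],
    better_index=0,
)
reward = judgment_reward(task, first_candidate_judge)  # → 1.0
```

In an actual RL loop, this scalar reward would be fed to a policy-gradient method over the model's full judgment-and-reasoning output, so that reasoning chains leading to correct judgments are reinforced without any annotated reflection text.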