🤖 AI Summary
Imitation learning for multi-turn language model agents suffers from covariate shift: once the student policy deviates from the expert's state-action distribution, it encounters states absent from the training data, reducing the effectiveness of fine-tuning. To address this, we propose on-policy expert corrections (OECs), a DAgger-inspired data-generation method in which rollouts are started by the student model and an expert model takes over partway through the trajectory, combining the coverage of on-policy interaction with the quality of expert demonstrations. Training then applies rejection sampling against the environment reward followed by supervised fine-tuning on the retained trajectories. On SWE-bench Verified, OEC data yields relative improvements of 14% and 13% over traditional imitation learning for 7B and 32B models, respectively, demonstrating that expert demonstrations must be combined with on-policy data to overcome the generalization bottleneck of conventional offline imitation learning.
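The rollout scheme summarized above can be made concrete with a minimal sketch. The code below assumes a generic agent/environment interface: `student.act`, `expert.act`, and `env.step` are placeholder names rather than the paper's actual API, and the randomly sampled switch turn stands in for whatever handover criterion is used in practice.

```python
# Hypothetical sketch of OEC trajectory generation: the student policy
# drives the rollout until a switch point, after which the expert model
# completes the trajectory. All interfaces here are assumptions, not
# the paper's implementation.
import random

def generate_oec_trajectory(student, expert, env, max_turns=50, switch_turn=None):
    """Roll out with the student, then hand over to the expert partway through."""
    # Choose the handover point; here a random turn serves as a placeholder
    # for a principled switching criterion.
    if switch_turn is None:
        switch_turn = random.randint(1, max_turns - 1)

    trajectory = []
    obs = env.reset()
    reward = 0.0
    for turn in range(max_turns):
        policy = student if turn < switch_turn else expert
        action = policy.act(obs)            # e.g., a tool call or code edit
        next_obs, reward, done = env.step(action)
        trajectory.append((obs, action, reward))
        obs = next_obs
        if done:
            break
    return trajectory, reward  # final environment reward (e.g., tests pass)
```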
📝 Abstract
A popular paradigm for training LM agents relies on imitation learning: fine-tuning on expert trajectories. However, we show that, due to its off-policy nature, imitation learning for multi-turn LM agents suffers from a fundamental limitation known as covariate shift: as the student policy's behavior diverges from the expert's, it encounters states not present in the training data, reducing the effectiveness of fine-tuning. Taking inspiration from the classic DAgger algorithm, we propose a novel data-generation methodology for addressing covariate shift in multi-turn LLM training. We introduce on-policy expert corrections (OECs), partially on-policy data generated by starting rollouts with a student model and then switching to an expert model partway through the trajectory. We explore the effectiveness of our data-generation technique in the domain of software engineering (SWE) tasks, a multi-turn setting where LLM agents must interact with a development environment to fix software bugs. Our experiments compare OEC data against other on-policy and imitation learning approaches on SWE agent problems, training models with a common technique of rejection sampling (using the environment reward) combined with supervised fine-tuning. We find that OEC trajectories yield relative improvements of 14% and 13% over traditional imitation learning in the 7B and 32B settings, respectively, on SWE-bench Verified. Our results demonstrate the need to combine expert demonstrations with on-policy data for effective multi-turn LM agent training.
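To illustrate the training recipe the abstract names, here is a hedged sketch of rejection sampling against the environment reward followed by supervised fine-tuning. `build_sft_dataset` is a hypothetical helper, `fine_tune` stands in for any standard supervised fine-tuning loop, and `generate_oec_trajectory` refers to the rollout sketch above; none of these names come from the paper.

```python
# Hypothetical sketch of reward-based rejection sampling + SFT:
# keep only trajectories whose final environment reward indicates
# success (e.g., the patched tests pass), then fine-tune on them.

def build_sft_dataset(student, expert, env_tasks, samples_per_task=4):
    """Collect OEC rollouts and reject those with zero environment reward."""
    kept = []
    for env in env_tasks:
        for _ in range(samples_per_task):
            trajectory, final_reward = generate_oec_trajectory(student, expert, env)
            if final_reward > 0:  # rejection sampling on environment reward
                kept.append(trajectory)
    return kept

# Assumed usage: filter OEC rollouts, then run ordinary SFT on the survivors.
# student = fine_tune(student, build_sft_dataset(student, expert, tasks))
```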