🤖 AI Summary
This work addresses a critical limitation of existing post-training methods, which often neglect environmental feedback and consequently struggle to enable agents to recover from errors and generalize in long-horizon tasks. To overcome this, the paper introduces LEAFE, a novel framework that, for the first time, transforms reflective experiences into learnable recovery strategies. LEAFE synthesizes actionable insights from environmental feedback, rewinds to earlier decision points to explore alternative trajectories, and internalizes these experiences into the policy model via supervised fine-tuning. Evaluated under a fixed interaction budget, LEAFE substantially improves multi-path task success rates, consistently outperforming strong baselines such as GRPO across interactive programming and agent-based benchmarks. Notably, it achieves consistent gains in Pass@1 and boosts Pass@128 by up to 14%.
📝 Abstract
Large language models are increasingly deployed as autonomous agents that must plan, act, and recover from mistakes through long-horizon interaction with environments that provide rich feedback. However, prevailing outcome-driven post-training methods (e.g., RL with verifiable rewards) primarily optimize final success signals, leaving this feedback largely underutilized. Consequently, they often lead to distribution sharpening: the policy becomes better at reproducing a narrow set of already-successful behaviors, while failing to improve the feedback-grounded agency needed to expand problem-solving capacity (e.g., Pass@k) in long-horizon settings.
To address this, we propose LEAFE (Learning Feedback-Grounded Agency from Reflective Experience), a framework that internalizes recovery agency from reflective experience. Specifically, during exploration, the agent summarizes environment feedback into actionable experience, backtracks to earlier decision points, and explores alternative branches with revised actions. We then distill these experience-guided corrections into the model through supervised fine-tuning, enabling the policy to recover more effectively in future interactions. Across a diverse set of interactive coding and agentic tasks under fixed interaction budgets, LEAFE consistently improves Pass@1 over the base model and achieves higher Pass@k than outcome-driven baselines (GRPO) and experience-based methods such as Early Experience, with gains of up to 14% on Pass@128.
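The exploration loop described above (summarize environment feedback into actionable experience, backtrack to the faulty decision point, retry an alternative branch, and collect the corrected trajectory for supervised fine-tuning) can be sketched on a toy task. Everything here is an illustrative assumption, not the paper's actual implementation: the sequence-matching "environment", the `feedback` function, the flip-the-action revision heuristic, and the `sft_buffer` are all hypothetical stand-ins.

```python
# Toy LEAFE-style loop: roll out, read environment feedback, backtrack
# to the first faulty decision point, revise that action, and keep the
# corrected trajectory as an SFT example. Purely illustrative.

TARGET = [1, 0, 1]  # hypothetical multi-step task: emit this sequence

def feedback(traj):
    """Environment feedback: index of the first wrong action, or None on success."""
    for i, (a, t) in enumerate(zip(traj, TARGET)):
        if a != t:
            return i
    return None

def leafe_rollout(base_policy, budget=8):
    """Explore with backtracking under a fixed interaction budget.
    Returns (final_traj, sft_buffer, attempts)."""
    traj = [base_policy(i) for i in range(len(TARGET))]
    sft_buffer, attempts = [], 1
    while attempts <= budget:
        err = feedback(traj)
        if err is None:
            # Success: store the corrected trajectory for later SFT distillation.
            sft_buffer.append(list(traj))
            return traj, sft_buffer, attempts
        # "Actionable experience": step `err` failed, so backtrack there and
        # explore an alternative branch (revised action = flip the bit),
        # re-sampling the remaining steps from the base policy.
        traj = (traj[:err] + [1 - traj[err]]
                + [base_policy(i) for i in range(err + 1, len(TARGET))])
        attempts += 1
    return traj, sft_buffer, attempts

# A base policy that always outputs 0 recovers the target in 3 attempts.
traj, sft, n = leafe_rollout(lambda i: 0)
```

The key design point the sketch mirrors is that failed rollouts are not discarded: each failure is converted into a localized revision, and only the feedback-corrected trajectories enter the SFT buffer that updates the policy.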