🤖 AI Summary
This work addresses the vulnerability of reinforcement learning–trained reasoning models to catastrophic failures when confronted with flawed contexts—such as incorrect reasoning chains, misleading intermediate steps, or minor input perturbations. To mitigate this, the authors propose GASP (Guided Adversarial Self-Play), a method that establishes an internal adversarial mechanism within a single model by pitting a “polluter” against a “repairer.” Relying solely on verifiable reward signals and without requiring human annotations or external teachers, GASP leverages self-generated repair samples to provide in-distribution guidance, forming an effective curriculum driven by adversarial perturbations. This approach enhances robustness to contaminated contexts while mitigating catastrophic forgetting of existing capabilities. Experiments across four open-source models (1.5B–8B parameters) demonstrate that GASP substantially improves recovery from misleading or perturbed inputs and often boosts accuracy even on original, clean data.
📝 Abstract
Reinforcement learning from verifiable rewards (RLVR) produces strong reasoning models, yet they can fail catastrophically when the conditioning context is fallible (e.g., corrupted chain-of-thought, misleading partial solutions, or mild input perturbations), since standard RLVR optimizes final-answer correctness only under clean conditioning. We introduce GASP (Guided Adversarial Self-Play), a robustification method that explicitly trains detect-and-repair capabilities using only outcome verification. Without human labels or external teachers, GASP forms an adversarial self-play game within a single model: a polluter learns to induce failure via locally coherent corruptions, while an agent learns to diagnose and recover under the same corrupted conditioning. To address the scarcity of successful recoveries early in training, we propose in-distribution repair guidance, an imitation term on self-generated repairs that increases recovery probability while preserving previously acquired capabilities. Across four open-weight models (1.5B–8B), GASP transforms strong-but-brittle reasoners into robust ones that withstand misleading and perturbed contexts while often improving clean accuracy. Further analysis shows that adversarial corruptions induce an effective curriculum, and in-distribution guidance enables rapid recovery learning with minimal representational drift.
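The polluter-vs-repairer loop described in the abstract can be illustrated with a minimal toy sketch. Everything below is a hypothetical illustration of the structure (a one-step arithmetic "reasoning chain", hand-written `polluter` and `repairer` stand-ins for learned policies, a `gasp_step` driver), not the paper's actual implementation; the point is only that both roles are scored purely by outcome verification, and that successful self-generated repairs are buffered as in-distribution imitation targets.

```python
import random

def verify(answer, gold):
    """Verifiable reward: 1.0 iff the final answer matches the gold answer."""
    return 1.0 if answer == gold else 0.0

def polluter(chain, rng):
    """Corrupt one intermediate value (toy stand-in for a learned polluter policy)."""
    corrupted = list(chain)
    i = rng.randrange(len(corrupted))
    corrupted[i] += rng.choice([-1, 1])
    return corrupted

def repairer(corrupted_chain, problem):
    """Recompute the answer from the problem instead of trusting the corrupted
    context (toy stand-in for a learned detect-and-repair policy)."""
    a, b = problem
    return [a + b]

def gasp_step(problem, gold, repair_buffer, rng):
    clean_chain = [sum(problem)]           # the "clean" reasoning chain
    corrupted = polluter(clean_chain, rng)
    # The polluter is rewarded when its corruption would flip the verified outcome.
    polluter_reward = 1.0 - verify(corrupted[-1], gold)
    repaired = repairer(corrupted, problem)
    repair_reward = verify(repaired[-1], gold)
    if repair_reward > 0.0:
        # In-distribution repair guidance: successful self-generated repairs
        # become imitation targets for later training updates.
        repair_buffer.append((corrupted, repaired))
    return polluter_reward, repair_reward

rng = random.Random(0)
buffer = []
p_r, r_r = gasp_step((3, 4), 7, buffer, rng)
print(p_r, r_r, len(buffer))  # → 1.0 1.0 1
```

In a real RLVR setting both roles would be the same policy model prompted in different modes, and the two rewards would drive policy-gradient updates while the buffer feeds the imitation term; this sketch only fixes the control flow those updates act on.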