AI Summary
This work addresses a core challenge in reinforcement learning (RL) for verifiable domains such as code generation and mathematical reasoning: reliance on sparse scalar rewards impedes effective credit assignment, even though the environment often returns abundant textual feedback. To overcome this limitation, the authors propose Self-Distillation Policy Optimization (SDPO), which uses the model's own feedback-informed predictions of corrective tokens to construct dense supervision signals. These signals are distilled back into the policy via a self-teaching mechanism, enabling context-aware self-correction without external teachers or explicit reward models. The work formalizes this setting as reinforcement learning with rich feedback, and SDPO significantly outperforms existing RLVR approaches on scientific reasoning, tool use, and competitive programming (LiveCodeBench v6). At test time, applying SDPO to a single problem matches the discovery probability of best-of-k sampling or multi-turn conversation with three times fewer attempts.
Abstract
Large language models are increasingly post-trained with reinforcement learning in verifiable domains such as code and math. Yet current methods for reinforcement learning with verifiable rewards (RLVR) learn only from a scalar outcome reward per attempt, creating a severe credit-assignment bottleneck. Many verifiable environments actually provide rich textual feedback, such as runtime errors or judge evaluations, that explains why an attempt failed. We formalize this setting as reinforcement learning with rich feedback and introduce Self-Distillation Policy Optimization (SDPO), which converts tokenized feedback into a dense learning signal without any external teacher or explicit reward model. SDPO treats the current model conditioned on feedback as a self-teacher and distills its feedback-informed next-token predictions back into the policy. In this way, SDPO leverages the model's ability to retrospectively identify its own mistakes in-context. Across scientific reasoning, tool use, and competitive programming (LiveCodeBench v6), SDPO improves sample efficiency and final accuracy over strong RLVR baselines. Notably, SDPO also outperforms baselines in standard RLVR environments that return only scalar feedback, by using successful rollouts as implicit feedback for failed attempts. Finally, applying SDPO to individual questions at test time accelerates discovery on difficult binary-reward tasks, achieving the same discovery probability as best-of-k sampling or multi-turn conversations with 3x fewer attempts.
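To make the self-distillation step concrete, here is a minimal PyTorch-style sketch under stated assumptions: a Hugging Face-style causal LM whose forward pass returns `.logits`, and a teacher context that simply prepends the textual feedback before re-scoring the same attempt tokens. The function name `sdpo_self_distillation_loss` and the exact context ordering are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def sdpo_self_distillation_loss(model, prompt_ids, attempt_ids, feedback_ids):
    """Sketch of one SDPO-style update: distill the feedback-conditioned
    self-teacher's next-token predictions into the unconditioned policy.

    All *_ids arguments are LongTensors of shape [batch, seq_len];
    `model` is assumed to be a causal LM returning an object with .logits.
    """
    attempt_len = attempt_ids.size(-1)

    # Teacher pass: the same model, but with the textual feedback in context
    # (assumed ordering: prompt, feedback, then the original attempt).
    teacher_input = torch.cat([prompt_ids, feedback_ids, attempt_ids], dim=-1)
    with torch.no_grad():
        teacher_logits = model(teacher_input).logits
        # Logits at position i predict token i+1, so the predictions for the
        # attempt tokens sit in the last attempt_len positions, shifted by one.
        teacher_probs = F.softmax(
            teacher_logits[:, -attempt_len - 1:-1, :], dim=-1
        )

    # Student pass: the ordinary policy context, without feedback.
    student_input = torch.cat([prompt_ids, attempt_ids], dim=-1)
    student_logits = model(student_input).logits
    student_logp = F.log_softmax(
        student_logits[:, -attempt_len - 1:-1, :], dim=-1
    )

    # Dense per-token signal: KL(teacher || student) over the attempt tokens,
    # averaged over the batch. Minimizing this pulls the policy toward its
    # own feedback-informed predictions.
    return F.kl_div(student_logp, teacher_probs, reduction="batchmean")
```

In this reading of the abstract, the dense signal is a per-token divergence from the feedback-conditioned self-teacher to the unconditioned policy, which is what gives SDPO token-level credit assignment where scalar RLVR provides only one reward per rollout.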