🤖 AI Summary
This work addresses the limitations of softmax reweighting in diffusion-based policy reinforcement learning, which often leads to excessive greediness and ineffective utilization of negative feedback. To overcome these issues, the authors propose the Signed Measure Policy Optimization (SiMPO) framework, which employs a two-stage measure-matching procedure: first constructing a virtual target policy regularized by an f-divergence and relaxing the non-negativity constraint to admit signed measures, then using this signed measure to guide reweighting in diffusion or flow models. SiMPO accommodates any monotonically increasing weight function, providing both theoretical grounding and geometric intuition for negative reweighting, thereby elucidating its mechanism for repelling suboptimal actions. Experiments demonstrate that SiMPO significantly outperforms existing methods across diverse tasks and offers practical guidance for designing reweighting schemes under arbitrary reward distributions.
📝 Abstract
A commonly used family of RL algorithms for diffusion policies conducts softmax reweighting over the behavior policy, which usually induces an over-greedy policy and fails to leverage feedback from negative samples. In this work, we introduce Signed Measure Policy Optimization (SiMPO), a simple and unified framework that generalizes the reweighting scheme in diffusion RL to general monotonic functions. SiMPO revisits diffusion RL through a two-stage measure-matching lens. First, we construct a virtual target policy via $f$-divergence regularized policy optimization, where we relax the non-negativity constraint to allow a signed target measure. Second, we use this signed measure to guide diffusion or flow models through reweighted matching. This formulation offers two key advantages: a) it generalizes to arbitrary monotonically increasing weighting functions; and b) it provides a principled justification and practical guidance for negative reweighting. Furthermore, we provide geometric interpretations that illustrate how negative reweighting actively repels the policy from suboptimal actions. Extensive empirical evaluations demonstrate that SiMPO achieves superior performance by leveraging these flexible weighting schemes, and we provide practical guidelines for selecting reweighting methods tailored to the reward landscape.
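To make the contrast concrete, here is a minimal numerical sketch (not the paper's implementation; reward values and the centered-linear weight choice are illustrative assumptions) comparing softmax reweighting, which always assigns positive weight, with a signed monotone weighting that assigns negative weight to below-average samples:

```python
import math

# Rewards of samples drawn from the behavior policy (illustrative values).
rewards = [-1.0, 0.0, 0.5, 2.0]

# Softmax reweighting: weights are strictly positive, so every sample
# attracts the policy -- negative feedback cannot repel it.
beta = 1.0
z = sum(math.exp(beta * r) for r in rewards)
softmax_w = [math.exp(beta * r) / z for r in rewards]

# Signed reweighting: once the non-negativity constraint is relaxed,
# any monotonically increasing weight function is admissible. A simple
# example is the advantage-centered identity w(r) = r - mean(r).
mean_r = sum(rewards) / len(rewards)
signed_w = [r - mean_r for r in rewards]

# In a reweighted matching loss sum_i w_i * ||model_error_i||^2,
# a negative w_i pushes the model away from that sample.
print(softmax_w)  # all positive
print(signed_w)   # below-average samples get negative weight
```

With these values, the softmax weights are all positive, while the signed scheme assigns negative weight to the two below-average rewards, which is the mechanism by which suboptimal actions are actively repelled.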