🤖 AI Summary
To address the limited generalization of supervised fine-tuning (SFT) and the reward-model dependency, preference-data reliance, and overfitting risks of reinforcement learning (RL) in aligning diffusion-based visual generation with human intent, this paper proposes a reward-free, unpaired semi-policy preference optimization framework. The method bridges the stability of SFT and the generalization capability of RL through three core innovations: (1) a novel semi-policy preference optimization paradigm that operates partially on-policy while retaining off-policy robustness; (2) a dynamic reference model selection mechanism that broadens policy-space exploration; and (3) an anchor-driven quality criterion for reference samples that suppresses low-quality samples. The approach jointly integrates preference optimization, diffusion model fine-tuning, reference distillation, and anchor-based contrastive learning, eliminating the need for explicit reward modeling or human-annotated preference pairs. Extensive experiments on text-to-image and text-to-video benchmarks demonstrate consistent gains over SFT, RLHF, and existing reward-free methods, improving both generation fidelity and alignment with human preferences.
📝 Abstract
Reinforcement learning from human feedback (RLHF) methods are emerging as a way to fine-tune diffusion models (DMs) for visual generation. However, commonly used on-policy strategies are limited by the generalization capability of the reward model, while off-policy approaches require large amounts of difficult-to-obtain paired human-annotated data, particularly in visual generation tasks. To address the limitations of both on- and off-policy RLHF, we propose a preference optimization method that aligns DMs with preferences without relying on reward models or paired human-annotated data. Specifically, we introduce a Semi-Policy Preference Optimization (SePPO) method. SePPO leverages previous checkpoints as reference models while using them to generate on-policy reference samples, which replace "losing images" in preference pairs. This approach allows us to optimize using only off-policy "winning images." Furthermore, we design a strategy for reference model selection that expands the exploration in the policy space. Notably, we do not simply treat reference samples as negative examples for learning. Instead, we design an anchor-based criterion to assess whether the reference samples are likely to be winning or losing images, allowing the model to selectively learn from the generated reference samples. This approach mitigates the performance degradation caused by uncertainty in reference sample quality. We validate SePPO across both text-to-image and text-to-video benchmarks. SePPO surpasses all previous approaches on the text-to-image benchmarks and also demonstrates outstanding performance on the text-to-video benchmarks. Code will be released at https://github.com/DwanZhang-AI/SePPO.
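The core idea above can be sketched in code. This is a minimal toy illustration, not the paper's exact objective: it assumes a DPO-style implicit reward (beta times the policy-to-reference log-probability ratio), and the particular form of the anchor-based criterion shown here (comparing the reference sample's implicit reward against the winning sample's, with the winner acting as the anchor) is an assumption for illustration only.

```python
import math

def seppo_style_loss(logp_w_policy, logp_w_ref,
                     logp_r_policy, logp_r_ref,
                     beta=0.1):
    """Toy semi-policy preference loss (illustrative sketch only).

    The off-policy "winning" sample w is paired with a reference sample r
    generated by a previous checkpoint, which stands in for the missing
    "losing image". An anchor-based check (assumed form) decides whether
    r should be pushed down as a loser or learned from as a likely winner.
    """
    # DPO-style implicit rewards: beta * log-ratio of policy to reference.
    rw = beta * (logp_w_policy - logp_w_ref)
    rr = beta * (logp_r_policy - logp_r_ref)

    # Anchor criterion (assumption): if the reference sample's implicit
    # reward already exceeds the winner's, treat it as a likely winner
    # and flip its role instead of treating it as a negative example.
    ref_is_likely_winner = rr > rw
    sign = -1.0 if ref_is_likely_winner else 1.0

    # Logistic preference loss; the sign selects r's role in the pair.
    return -math.log(1.0 / (1.0 + math.exp(-sign * (rw - rr))))
```

In this sketch, widening the margin between the winning sample and a genuinely worse reference sample lowers the loss, while a reference sample that outscores the anchor is learned from rather than suppressed, mirroring the selective-learning behavior the abstract describes.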