🤖 AI Summary
Existing negative preference optimization (NPO) methods rely on costly human preference annotations or external reward models, and consequently generalize and scale poorly. To address this, we propose Self-NPO, the first fully self-supervised NPO framework, which eliminates the need for human labels or external reward models. Self-NPO leverages the diffusion model's own sampling process to generate "non-preferred" negative samples and introduces a gradient-reversal-guided NPO loss compatible with classifier-free guidance (CFG), enabling end-to-end negative preference learning. The method is plug-and-play and integrates seamlessly with mainstream diffusion models, including Stable Diffusion 1.5, SDXL, and CogVideoX. Extensive experiments demonstrate significant improvements in FID, CLIP Score, and human preference win rates, while requiring zero annotation effort and training efficiently. Self-NPO establishes a new paradigm for alignment optimization under data-scarce conditions.
📝 Abstract
Diffusion models have demonstrated remarkable success in various visual generation tasks, including image, video, and 3D content generation. Preference optimization (PO) is a prominent and growing area of research that aims to align these models with human preferences. While existing PO methods primarily concentrate on producing favorable outputs, they often overlook the significance of classifier-free guidance (CFG) in mitigating undesirable results. Diffusion-NPO addresses this gap by introducing negative preference optimization (NPO), training models to generate outputs opposite to human preferences and thereby steering them away from unfavorable outcomes. However, prior NPO approaches, including Diffusion-NPO, rely on costly and fragile procedures for obtaining explicit preference annotations (e.g., manual pairwise labeling or reward model training), limiting their practicality in domains where such data are scarce or difficult to acquire. In this work, we introduce Self-NPO, an NPO approach that learns exclusively from the model itself, thereby eliminating the need for manual data labeling or reward model training. Moreover, our method is highly efficient and does not require exhaustive data sampling. We demonstrate that Self-NPO integrates seamlessly into widely used diffusion models, including SD1.5, SDXL, and CogVideoX, as well as models already optimized for human preferences, consistently enhancing both their generation quality and alignment with human preferences.
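The abstract frames NPO as complementing classifier-free guidance: at inference, the conditional prediction comes from the (possibly preference-aligned) model, while an NPO-trained model, tuned toward non-preferred outputs, stands in for the plain unconditional branch, so guidance extrapolates away from what humans dislike. A minimal sketch of that combination, assuming the standard CFG update rule; all names here (`eps_preferred`, `eps_npo`, `guidance_scale`) are hypothetical, and this is not the paper's training objective:

```python
def cfg_with_npo(eps_preferred, eps_npo, guidance_scale=7.5):
    """CFG-style combination (a sketch, not the paper's exact formulation):
    extrapolate away from the negative-preference prediction toward the
    preferred one.

    eps_preferred: per-element noise prediction from the prompt-conditioned
                   (preference-aligned) model.
    eps_npo:       noise prediction from the NPO-tuned weights, replacing
                   the plain unconditional branch of standard CFG.
    """
    return [e_n + guidance_scale * (e_p - e_n)
            for e_p, e_n in zip(eps_preferred, eps_npo)]

# Toy usage with scalar "noise predictions" per element:
guided = cfg_with_npo([0.4, -0.2], [0.1, 0.1], guidance_scale=5.0)
```

With `guidance_scale > 1`, the result is pushed further from the NPO model's prediction than the preferred prediction alone, which is the mechanism by which negative preference learning steers sampling away from unfavorable outputs.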