🤖 AI Summary
This work proposes CRAFT, a lightweight and efficient fine-tuning paradigm for aligning diffusion models without relying on large-scale preference data or high-quality images, both of which are costly to obtain. CRAFT constructs a high-quality training set via Composite Reward Filtering (CRF) and combines it with an enhanced supervised fine-tuning procedure to achieve effective alignment. A theoretical analysis connects supervised fine-tuning on filtered data to group-based reinforcement learning, showing that CRAFT optimizes a lower bound on the reinforcement learning objective. Remarkably, CRAFT achieves significant performance gains over existing methods that require thousands of preference pairs while using only around 100 samples, and it converges 11 to 220 times faster, offering both high efficiency and strong scalability.
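As a rough sketch of this connection (our notation, not necessarily the paper's exact bound): let $r \ge 0$ be the composite reward and $q$ the distribution of CRF-filtered samples. Because discarded candidates contribute non-negative reward mass and $\log$ is concave, Jensen's inequality gives

$$\log \mathbb{E}_{x\sim\pi_\theta}[r(x)] \;\ge\; \log \mathbb{E}_{x\sim q}\!\left[\frac{\pi_\theta(x)\,r(x)}{q(x)}\right] \;\ge\; \underbrace{\mathbb{E}_{x\sim q}\left[\log \pi_\theta(x)\right]}_{\text{SFT on filtered data}} \;+\; \underbrace{\mathbb{E}_{x\sim q}\!\left[\log\frac{r(x)}{q(x)}\right]}_{\text{independent of }\theta},$$

so maximizing the SFT log-likelihood on reward-filtered samples maximizes a lower bound on the log expected reward.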
📝 Abstract
Aligning diffusion models has led to remarkable breakthroughs in generating high-quality, human-preference-aligned images. Existing techniques, such as supervised fine-tuning (SFT) and DPO-style preference optimization, have become principled tools for fine-tuning diffusion models. However, SFT relies on high-quality images that are costly to obtain, while DPO-style methods depend on large-scale preference datasets, which are often inconsistent in quality. Beyond data dependency, these methods are further constrained by computational inefficiency. To address these two challenges, we propose Composite Reward Assisted Fine-Tuning (CRAFT), a lightweight yet powerful fine-tuning paradigm that requires significantly less training data while maintaining computational efficiency. CRAFT first leverages a Composite Reward Filtering (CRF) technique to construct a high-quality, consistent training dataset and then performs an enhanced variant of SFT. We also prove theoretically that CRAFT optimizes a lower bound of a group-based reinforcement learning objective, establishing a principled connection between SFT on selected data and reinforcement learning. Our extensive empirical results demonstrate that CRAFT with only 100 samples can easily outperform recent SOTA preference optimization methods trained on thousands of preference-paired samples. Moreover, CRAFT achieves 11-220$\times$ faster convergence than baseline preference optimization methods, highlighting its extremely high efficiency.
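To make the pipeline concrete, here is a minimal Python sketch of the CRF step under stated assumptions: `generate`, `reward_aesthetic`, `reward_alignment`, the reward weights, and the keep-per-prompt rule are all placeholders invented for illustration, not the paper's actual models, reward mix, or filtering rule.

```python
import numpy as np

# --- Hypothetical interfaces (stand-ins, not the paper's code) ---
rng = np.random.default_rng(0)

def generate(prompt: str, n: int) -> list[np.ndarray]:
    """Stand-in for sampling n candidate images from the diffusion model."""
    return [rng.standard_normal((8, 8)) for _ in range(n)]

def reward_aesthetic(image: np.ndarray) -> float:
    """Stand-in for an aesthetic reward model."""
    return float(image.mean())

def reward_alignment(prompt: str, image: np.ndarray) -> float:
    """Stand-in for a text-image alignment reward (e.g., a CLIP-style score)."""
    return float(image.std())

def composite_reward(prompt, image, weights=(0.5, 0.5)) -> float:
    """Weighted mix of individual rewards; the mix itself is an assumption."""
    w_a, w_t = weights
    return w_a * reward_aesthetic(image) + w_t * reward_alignment(prompt, image)

def crf_filter(prompts, n_candidates=8, keep_per_prompt=1):
    """Composite Reward Filtering (sketch): sample a group of candidates per
    prompt, score each with the composite reward, keep only the top ones."""
    dataset = []
    for prompt in prompts:
        candidates = generate(prompt, n_candidates)
        scores = [composite_reward(prompt, img) for img in candidates]
        top = np.argsort(scores)[-keep_per_prompt:]  # highest composite reward
        dataset.extend((prompt, candidates[i]) for i in top)
    return dataset  # ~100 such samples would then feed the enhanced SFT stage

filtered = crf_filter(["a red bicycle", "a snowy harbor"])
print(f"kept {len(filtered)} samples for fine-tuning")
```

Keeping only the top-scoring candidate per group mirrors the paper's claim that a small, consistently high-reward dataset suffices; the subsequent fine-tuning stage is ordinary SFT on `filtered`, so no preference pairs are needed.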