Rethinking Preference Alignment for Diffusion Models with Classifier-Free Guidance

📅 2026-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the generalization gap in aligning large-scale text-to-image diffusion models with complex human preferences, a challenge that typically requires costly retraining of foundation models. The authors propose a training-free preference alignment framework that moves alignment into the sampling phase via classifier-free guidance (CFG). Their approach uses a contrastive guidance mechanism: two preference modules are trained separately on positive and negative preference samples, and at inference the difference between their predictions is combined into a sharpened guidance vector. This significantly improves both the controllability and the generalization of preference alignment. Consistent quantitative and qualitative gains are demonstrated on Stable Diffusion 1.5 and SDXL using the Pick-a-Pic v2 and HPDv3 benchmarks.

📝 Abstract
Aligning large-scale text-to-image diffusion models with nuanced human preferences remains challenging. While direct preference optimization (DPO) is simple and effective, large-scale finetuning often shows a generalization gap. We take inspiration from test-time guidance and cast preference alignment as classifier-free guidance (CFG): a finetuned preference model acts as an external control signal during sampling. Building on this view, we propose a simple method that improves alignment without retraining the base model. To further enhance generalization, we decouple preference learning into two modules trained on positive and negative data, respectively, and form a *contrastive guidance* vector at inference by subtracting their predictions (positive minus negative), scaled by a user-chosen strength and added to the base prediction at each step. This yields a sharper and controllable alignment signal. We evaluate on Stable Diffusion 1.5 and Stable Diffusion XL with Pick-a-Pic v2 and HPDv3, showing consistent quantitative and qualitative gains.
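The guidance rule described in the abstract — subtract the negative module's prediction from the positive one, scale by a user-chosen strength, and add the result to the base prediction at each denoising step — can be sketched as follows. This is an illustrative sketch only: the function name, argument names, and use of raw NumPy arrays in place of model outputs are assumptions, not the paper's implementation.

```python
import numpy as np

def contrastive_guided_prediction(eps_base, eps_pos, eps_neg, strength):
    """One denoising step's guided noise prediction (illustrative sketch).

    eps_base: prediction from the frozen base diffusion model
    eps_pos:  prediction from the module trained on positive preference data
    eps_neg:  prediction from the module trained on negative preference data
    strength: user-chosen guidance scale controlling alignment strength
    """
    # Contrastive guidance vector: positive minus negative prediction,
    # scaled and added to the base prediction.
    return eps_base + strength * (eps_pos - eps_neg)

# Toy example with stand-in arrays in place of real model outputs.
base = np.zeros(4)
pos = np.ones(4)
neg = np.full(4, 0.5)
guided = contrastive_guided_prediction(base, pos, neg, strength=2.0)
```

With `strength = 0` this reduces to the base model's prediction, so the user-chosen strength interpolates between no alignment and increasingly sharp preference guidance.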
Problem

Research questions and friction points this paper is trying to address.

Preference Alignment
Diffusion Models
Classifier-Free Guidance
Generalization Gap
Text-to-Image Generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

classifier-free guidance
preference alignment
contrastive guidance
diffusion models
test-time adaptation