CPO: Condition Preference Optimization for Controllable Image Generation

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two key bottlenecks in controllable text-to-image generation: (1) ControlNet++’s restriction to low-noise timesteps, which leaves the information at high-noise timesteps unexploited; and (2) the susceptibility of direct image-level Direct Preference Optimization (DPO) to generation uncertainty, which entangles controllability with image fidelity. To this end, the authors propose Condition Preference Optimization (CPO), the first method to transfer preference learning into the control-signal space. CPO performs end-to-end training by contrasting control conditions—not generated images—enabling joint optimization across the full denoising trajectory (including both high- and low-noise timesteps). This design substantially reduces training variance, eliminates confounding effects from image quality, and lowers annotation overhead. Experiments demonstrate that CPO consistently outperforms ControlNet++ on segmentation, human pose, edge, and depth control tasks, reducing error rates by over 10% (segmentation), 70–80% (pose), and 2–5% (edge and depth).

📝 Abstract
To enhance controllability in text-to-image generation, ControlNet introduces image-based control signals, while ControlNet++ improves pixel-level cycle consistency between generated images and the input control signal. To avoid the prohibitive cost of back-propagating through the sampling process, ControlNet++ optimizes only low-noise timesteps (e.g., $t < 200$) using a single-step approximation, which not only ignores the contribution of high-noise timesteps but also introduces additional approximation errors. A straightforward alternative for optimizing controllability across all timesteps is Direct Preference Optimization (DPO), a fine-tuning method that increases model preference for more controllable images ($I^{w}$) over less controllable ones ($I^{l}$). However, due to uncertainty in generative models, it is difficult to ensure that win–lose image pairs differ only in controllability while keeping other factors, such as image quality, fixed. To address this, we propose performing preference learning over control conditions rather than generated images. Specifically, we construct winning and losing control signals, $\mathbf{c}^{w}$ and $\mathbf{c}^{l}$, and train the model to prefer $\mathbf{c}^{w}$. This method, which we term *Condition Preference Optimization* (CPO), eliminates confounding factors and yields a low-variance training objective. Our approach theoretically exhibits lower contrastive loss variance than DPO and empirically achieves superior results. Moreover, CPO requires less computation and storage for dataset curation. Extensive experiments show that CPO significantly improves controllability over the state-of-the-art ControlNet++ across multiple control types: over 10% error rate reduction in segmentation, 70–80% in human pose, and consistent 2–5% reductions in edge and depth maps.
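The DPO-style contrastive objective described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name `cpo_loss` and the use of scalar per-condition denoising errors as implicit preference scores are hypothetical; the source only states that CPO trains the model to prefer the winning condition $\mathbf{c}^{w}$ over the losing one $\mathbf{c}^{l}$ via a low-variance contrastive loss.

```python
import math

def cpo_loss(err_theta_w, err_ref_w, err_theta_l, err_ref_l, beta=0.1):
    """Sketch of a DPO-style contrastive loss over control conditions.

    err_*: scalar denoising errors of the trained model (theta) and a
    frozen reference model (ref) under the winning (w) and losing (l)
    control conditions. A lower error under a condition is read as the
    model "preferring" that condition more.
    """
    # Implicit reward of each condition: improvement of the trained
    # model over the reference under that condition.
    reward_w = err_ref_w - err_theta_w
    reward_l = err_ref_l - err_theta_l
    logit = beta * (reward_w - reward_l)
    # -log sigmoid(logit): small when the winning condition is preferred.
    return -math.log(1.0 / (1.0 + math.exp(-logit)))
```

Because the contrast is between conditions rather than sampled images, the pair differs only in the control signal, which is the source of the variance reduction the abstract claims.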
Problem

Research questions and friction points this paper is trying to address.

Optimizing controllability across all timesteps in text-to-image generation
Eliminating confounding factors in preference learning for controllable generation
Reducing computational cost while improving pixel-level cycle consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes control conditions instead of generated images
Constructs winning and losing control signal pairs
Reduces variance in training objective theoretically and empirically
Zonglin Lyu
University of Central Florida
Computer Vision, Multimodal Learning, Artificial Intelligence, Machine Learning, Generative Models
Ming Li
Institute of Artificial Intelligence, University of Central Florida, Orlando, FL 32816
Xinxin Liu
Institute of Artificial Intelligence, University of Central Florida, Orlando, FL 32816
Chen Chen
Institute of Artificial Intelligence, University of Central Florida, Orlando, FL 32816