🤖 AI Summary
This work addresses two key bottlenecks in controllable text-to-image generation: (1) ControlNet++'s restriction to low-noise timesteps, leading to insufficient exploitation of temporal information; and (2) the susceptibility of direct image-level Direct Preference Optimization (DPO) to generation uncertainty, hindering disentanglement of controllability from image fidelity. To this end, we propose Condition Preference Optimization (CPO), the first method to transfer preference learning into the control-signal space. CPO performs end-to-end training by contrasting control conditions—not generated images—enabling joint optimization across the full denoising trajectory (including both high- and low-noise timesteps). This design substantially reduces training variance, eliminates confounding effects from image quality, and lowers annotation overhead. Experiments demonstrate that CPO consistently outperforms ControlNet++ across control tasks, reducing error rates by over 10% in segmentation, 70–80% in human pose, and 2–5% in edge and depth maps.
📝 Abstract
To enhance controllability in text-to-image generation, ControlNet introduces image-based control signals, while ControlNet++ improves pixel-level cycle consistency between generated images and the input control signal. To avoid the prohibitive cost of back-propagating through the sampling process, ControlNet++ optimizes only low-noise timesteps (e.g., $t<200$) using a single-step approximation, which not only ignores the contribution of high-noise timesteps but also introduces additional approximation errors. A straightforward alternative for optimizing controllability across all timesteps is Direct Preference Optimization (DPO), a fine-tuning method that increases model preference for more controllable images ($I^{w}$) over less controllable ones ($I^{l}$). However, due to uncertainty in generative models, it is difficult to ensure that win--lose image pairs differ only in controllability while keeping other factors, such as image quality, fixed. To address this, we propose performing preference learning over control conditions rather than generated images. Specifically, we construct winning and losing control signals, $\mathbf{c}^{w}$ and $\mathbf{c}^{l}$, and train the model to prefer $\mathbf{c}^{w}$. This method, which we term *Condition Preference Optimization* (CPO), eliminates confounding factors and yields a low-variance training objective. Our approach theoretically exhibits lower contrastive loss variance than DPO and empirically achieves superior results. Moreover, CPO requires less computation and storage for dataset curation. Extensive experiments show that CPO significantly improves controllability over the state-of-the-art ControlNet++ across multiple control types: over $10\%$ error rate reduction in segmentation, $70$--$80\%$ in human pose, and consistent $2$--$5\%$ reductions in edge and depth maps.
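To make the idea concrete, the DPO-style objective over conditions can be sketched as follows. This is a minimal, hypothetical illustration (not the paper's implementation): it assumes each sample yields per-timestep denoising errors of the trained model and a frozen reference model under the winning condition $\mathbf{c}^{w}$ and the losing condition $\mathbf{c}^{l}$, and prefers the model whose error drops more under $\mathbf{c}^{w}$. The function name `cpo_loss` and the inputs `err_*` are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cpo_loss(err_w, err_l, err_w_ref, err_l_ref, beta=1.0):
    """Hypothetical DPO-style contrastive loss over control conditions.

    err_w / err_l: denoising MSE of the trained model for the same image
    under the winning (c^w) and losing (c^l) control signals, at a timestep
    sampled from the FULL noise schedule (not just low-noise steps).
    err_w_ref / err_l_ref: the same quantities for a frozen reference model.
    """
    # Implicit-reward margin: how much more the trained model improves
    # (relative to the reference) under c^w than under c^l.
    margin = (err_w - err_w_ref) - (err_l - err_l_ref)
    # Preference loss: push the margin negative, i.e. favor c^w.
    return float(-np.log(sigmoid(-beta * margin)))
```

Because both branches denoise the same image, factors like image quality cancel in the margin; only the control signal differs, which is the source of the lower variance claimed above. For example, `cpo_loss(0.1, 0.5, 0.3, 0.3)` (model fits the winning condition better) is smaller than `cpo_loss(0.5, 0.1, 0.3, 0.3)`.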