🤖 AI Summary
Existing conditional image generation methods struggle with dual conflicts between text prompts and conditioning images: semantic inconsistencies at the input level and alignment degradation induced by model biases. To address this, we propose BideDPO, a bidirectionally decoupled Direct Preference Optimization (DPO) framework that jointly optimizes text fidelity and conditional alignment via gradient decoupling, adaptive loss balancing, and a conflict-aware data generation pipeline, refined through an iterative protocol that improves both the model and its training data. We introduce DualAlign, a benchmark for evaluating conflict resolution between text and condition, and additionally validate on COCO. Experiments demonstrate substantial improvements in text success rate (e.g., +35%) and condition adherence, while maintaining strong generalization and robustness across diverse prompts and conditions. Our approach offers a new way to model and resolve conflicts in multimodal conditional generation.
📝 Abstract
Conditional image generation enhances text-to-image synthesis with structural, spatial, or stylistic priors, but current methods struggle to handle conflicts between the conditioning sources. These include 1) input-level conflicts, where the conditioning image contradicts the text prompt, and 2) model-bias conflicts, where generative biases disrupt alignment even when the condition matches the text. Addressing these conflicts requires nuanced solutions that standard supervised fine-tuning struggles to provide. Preference-based optimization techniques such as Direct Preference Optimization (DPO) show promise but are limited by gradient entanglement between text and condition signals and by the lack of disentangled training data for multi-constraint tasks. To overcome this, we propose a bidirectionally decoupled DPO framework (BideDPO). Our method constructs two disentangled preference pairs, one for the condition and one for the text, to reduce gradient entanglement. The influence of each pair is managed with an Adaptive Loss Balancing strategy for balanced optimization. We introduce an automated data pipeline that samples model outputs and generates conflict-aware data; this process is embedded in an iterative optimization strategy that refines both the model and the data. We construct a DualAlign benchmark to evaluate conflict resolution between text and condition. Experiments show BideDPO significantly improves text success rates (e.g., +35%) and condition adherence. We also validate our approach on the COCO dataset. Project Page: https://limuloo.github.io/BideDPO/.
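To make the decoupling idea concrete, here is a minimal sketch of what a bidirectionally decoupled DPO objective could look like. This is not the paper's implementation: the standard DPO log-sigmoid loss is applied once per preference axis (condition and text), and the two terms are combined with an illustrative adaptive weighting that down-weights whichever term currently dominates. All function names and the specific weighting rule are assumptions for illustration.

```python
import math

def dpo_loss(lp_w, lp_l, ref_lp_w, ref_lp_l, beta=0.1):
    """Standard DPO loss from scalar sequence log-probs.

    lp_w / lp_l: policy log-probs of the preferred / rejected sample.
    ref_lp_w / ref_lp_l: the same under the frozen reference model.
    """
    margin = beta * ((lp_w - ref_lp_w) - (lp_l - ref_lp_l))
    # -log(sigmoid(margin))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def bidirectional_dpo_loss(cond_pair, text_pair, beta=0.1):
    """Decoupled objective: one DPO term per preference axis.

    cond_pair / text_pair: tuples (lp_w, lp_l, ref_lp_w, ref_lp_l),
    built from the condition-centric and text-centric preference
    pairs respectively, so each term carries a single signal.
    """
    l_cond = dpo_loss(*cond_pair, beta=beta)
    l_text = dpo_loss(*text_pair, beta=beta)
    # Illustrative adaptive balancing: weight each term inversely to
    # its current magnitude so neither axis dominates the update.
    total = l_cond + l_text + 1e-8
    w_cond = l_text / total
    w_text = l_cond / total
    return w_cond * l_cond + w_text * l_text
```

In a real training loop the log-probs would come from the diffusion model's (and its frozen reference's) likelihoods on the two disentangled preference pairs, and the weighting strategy would follow the paper's Adaptive Loss Balancing rather than this simple inverse-magnitude heuristic.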