🤖 AI Summary
Text-to-image diffusion models still struggle to follow natural language instructions accurately, particularly instructions specifying spatial relationships among objects. To address this, the authors propose Iterative Prompt Relabeling (IPR): a method that identifies image-text mismatches in generated samples, then refines the textual prompts using cross-modal matching scores and classifier-based feedback. Crucially, IPR integrates this iterative optimization directly into the diffusion training pipeline without requiring reinforcement learning (RL), avoiding the high variance and training instability of RL-based methods. IPR is validated on Stable Diffusion v2 and SDXL, achieving up to a 15.22% absolute improvement on the spatial-relation benchmark VISOR and substantially outperforming RL baselines. The implementation is publicly available.
📝 Abstract
Diffusion models have shown impressive performance in many domains. However, their ability to follow natural language instructions (e.g., spatial relationships between objects, generating complex scenes) remains unsatisfactory. In this work, we propose Iterative Prompt Relabeling (IPR), a novel algorithm that aligns images to text through iterative image sampling and prompt relabeling with feedback. IPR first samples a batch of images conditioned on the text, then relabels the text prompts of unmatched text-image pairs using classifier feedback. We conduct thorough experiments on SDv2 and SDXL, testing their ability to follow instructions on spatial relations. With IPR, we achieve up to a 15.22% absolute improvement on the challenging spatial-relation VISOR benchmark, outperforming previous RL methods. Our code is publicly available at https://github.com/xinyan-cxy/IPR-RLDF.
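The sample-then-relabel loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `generate_image` and `detect_relation` are toy stubs standing in for the diffusion sampler (SDv2/SDXL) and the classifier feedback, and the prompt representation as a (subject, relation, object) triple is an assumption made here for clarity.

```python
import random

RELATIONS = ["left of", "right of", "above", "below"]

def generate_image(subj, rel, obj, rng):
    # Stub for diffusion sampling: simulates imperfect instruction following
    # by producing an image whose actual spatial relation may differ from
    # the one requested in the prompt.
    return {"subject": subj, "object": obj, "relation": rng.choice(RELATIONS)}

def detect_relation(image):
    # Stub for classifier feedback: a real system would run an object detector
    # and compare bounding-box positions to decide the depicted relation.
    return image["relation"]

def ipr_round(prompts, rng):
    """One IPR round: sample an image per prompt, relabel mismatched prompts.

    Returns (image, prompt) pairs whose text now describes the image,
    usable as fine-tuning data for the next iteration.
    """
    pairs = []
    for subj, rel, obj in prompts:
        image = generate_image(subj, rel, obj, rng)
        detected = detect_relation(image)
        if detected != rel:
            rel = detected  # relabel: make the prompt match what was generated
        pairs.append((image, (subj, rel, obj)))
    return pairs

if __name__ == "__main__":
    rng = random.Random(0)
    prompts = [("a dog", "left of", "a cat"), ("a cup", "above", "a book")]
    pairs = ipr_round(prompts, rng)
    # After relabeling, every prompt's relation matches its image.
    assert all(img["relation"] == p[1] for img, p in pairs)
```

In the full method, the relabeled pairs would be fed back into diffusion fine-tuning each round, so the model iteratively learns from text that faithfully describes its own samples.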