🤖 AI Summary
Problem: Multimodal large language models (MLLMs) show weak fine-grained visual understanding in text-to-image generation, particularly in object-level layout, attribute consistency, and relational reasoning.
Method: We propose an object-centric self-improving preference optimization framework that requires no external data or models. It introduces object-level prompt perturbation and compositional prompt densification, coupled with VQA-driven automatic scoring and filtering, to autonomously construct high-quality, highly discriminative object-level contrastive preference pairs, effectively eliminating ambiguous and imbalanced samples.
Contribution/Results: Through multi-step end-to-end preference optimization, the method significantly improves object localization accuracy, attribute fidelity, and relational modeling. It achieves state-of-the-art performance on three compositional text-to-image benchmarks, demonstrating the effectiveness and scalability of the self-generated-data preference optimization paradigm.
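The pair-construction pipeline described above (perturb a prompt at the object level, score generations with VQA, keep only clearly discriminative pairs) can be sketched as a toy. Everything here is illustrative, not the paper's implementation: attributes come from a tiny hand-written vocabulary instead of an MLLM, "images" are stand-in text renderings, and `vqa_score` is a keyword-match stub standing in for the MLLM answering object-level questions:

```python
import random
from dataclasses import dataclass

# Toy vocabulary; OSPO instead uses the MLLM itself to perturb prompts.
ATTRIBUTES = {"cat": ["black", "white", "orange"], "ball": ["red", "blue", "green"]}

@dataclass
class PreferencePair:
    chosen: str       # stand-in for the image generated from the original prompt
    rejected: str     # stand-in for the image from the perturbed prompt
    margin: float     # VQA score gap; large margin = discriminative pair

def perturb(objects):
    """Object-level perturbation: swap one object's attribute to make a hard negative."""
    obj = random.choice(list(objects))
    alt = random.choice([a for a in ATTRIBUTES[obj] if a != objects[obj]])
    perturbed = dict(objects)
    perturbed[obj] = alt
    return perturbed

def render(objects):
    """Render an object->attribute map as a compositional prompt (and toy 'image')."""
    return "a photo of " + " and ".join(f"a {attr} {obj}" for obj, attr in objects.items())

def vqa_score(image_text, objects):
    """Stub VQA scorer: fraction of (attribute, object) facts 'visible' in the image."""
    hits = sum(1 for obj, attr in objects.items() if f"{attr} {obj}" in image_text)
    return hits / len(objects)

def build_pairs(objects, n_candidates=4, min_margin=0.5):
    """Generate perturbed candidates, score both against the original prompt,
    and keep only pairs whose margin clears the threshold (filters ambiguous pairs)."""
    pairs = []
    for _ in range(n_candidates):
        negative = perturb(objects)
        img_pos, img_neg = render(objects), render(negative)
        margin = vqa_score(img_pos, objects) - vqa_score(img_neg, objects)
        if margin >= min_margin:
            pairs.append(PreferencePair(img_pos, img_neg, margin))
    return pairs
```

The margin filter is the key design point the summary emphasizes: naively perturbed pairs can differ too little (ambiguous) or too much (imbalanced), and thresholding on the score gap keeps only contrastive pairs that are informative for preference learning.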
📝 Abstract
Recent advancements in Multimodal Large Language Models (MLLMs) have significantly improved both image understanding and generation capabilities. Despite these improvements, MLLMs still struggle with fine-grained visual comprehension, particularly in text-to-image generation tasks. While preference optimization methods have been explored to address these limitations in image understanding tasks, their application to image generation remains largely underexplored. To address this gap, we propose an Object-centric Self-improving Preference Optimization (OSPO) framework designed for text-to-image generation by MLLMs. OSPO leverages the intrinsic reasoning abilities of MLLMs without requiring any external datasets or models. OSPO emphasizes the importance of high-quality preference pair data, which is critical for effective preference optimization. To achieve this, it introduces a self-improving mechanism that autonomously constructs object-level contrastive preference pairs through object-centric prompt perturbation, densification, and VQA scoring. This process eliminates ambiguous or disproportionate variations commonly found in naively generated preference pairs, thereby enhancing the effectiveness of preference optimization. We validate OSPO on three representative compositional text-to-image benchmarks, demonstrating substantial performance gains over baseline models.
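The abstract leaves the optimization objective itself unspecified. Direct Preference Optimization (DPO) is the standard way to train on contrastive pairs like these, so as an assumption (the function name and scalar interface below are illustrative, not from the paper), the per-pair loss can be sketched as:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard per-pair DPO loss:
        -log sigmoid(beta * [(logp_c - ref_c) - (logp_r - ref_r)])
    where logp_* are the policy's log-probabilities of the chosen/rejected
    generations and ref_* the frozen reference model's log-probabilities."""
    logits = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    # -log sigmoid(x) == log(1 + exp(-x)), computed stably via log1p
    return math.log1p(math.exp(-logits))
```

When the policy has not yet moved from the reference model, both implicit rewards are zero and the loss sits at `log 2`; it decreases as the policy assigns relatively more probability to the chosen generation, which is the gradient signal the discriminative pairs above are built to provide.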