🤖 AI Summary
This work addresses two prevalent challenges in multi-instance text-to-image (T2I) generation: inaccurate modeling of inter-instance relationships and cross-instance attribute leakage. To this end, we propose a relation-aware disentangled learning framework. Methodologically, we introduce a Relation Attention mechanism that explicitly models action-based relationships among instances, guided by verb cues extracted from the global prompt, to generate relation-aware visual features; additionally, we design a learnable disentanglement module that hierarchically separates and represents instance positions, attributes, and relational semantics. Evaluated on the COCO-Position, COCO-MIG, and DrawBench benchmarks, our approach achieves significant improvements in positional accuracy, attribute fidelity, and relational semantic consistency, outperforming existing state-of-the-art methods.
📝 Abstract
With recent advancements in text-to-image (T2I) models, generating multiple instances within a single image from one prompt has become a crucial challenge. Existing methods, while successful at placing individual instances, often struggle to account for relationship discrepancies between instances and the leakage of attributes across instances. To address these limitations, this paper proposes the relation-aware disentangled learning (RaDL) framework. RaDL enhances instance-specific attributes through learnable parameters and generates relation-aware image features via Relation Attention, utilizing action verbs extracted from the global prompt. Through extensive evaluations on benchmarks such as COCO-Position, COCO-MIG, and DrawBench, we demonstrate that RaDL outperforms existing methods, showing significant improvements in positional accuracy, multi-attribute fidelity, and the relationships between instances. Our results present RaDL as a solution for generating images that respect both the relationships and the multiple attributes of each instance in a multi-instance image.
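The abstract does not specify how Relation Attention is computed, but the description (instance features attending to verb tokens from the global prompt to produce relation-aware features) matches a standard cross-attention pattern. The following is a minimal, framework-free sketch of that pattern under that assumption; the function name `relation_attention`, the residual fusion, and the plain-list representation are illustrative choices, not the paper's implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def relation_attention(instance_feats, verb_feats):
    """Cross-attention sketch: each instance feature (query) attends over
    verb-token features from the global prompt (keys/values), producing a
    relation-aware feature per instance.

    instance_feats, verb_feats: lists of equal-length float vectors.
    NOTE: a hypothetical illustration of the mechanism described in the
    abstract, not the authors' actual architecture (which would use learned
    query/key/value projections inside a diffusion backbone).
    """
    d = len(verb_feats[0])
    scale = 1.0 / math.sqrt(d)  # standard scaled dot-product attention
    out = []
    for q in instance_feats:
        # Similarity of this instance to each verb token
        scores = [scale * sum(qi * ki for qi, ki in zip(q, k)) for k in verb_feats]
        weights = softmax(scores)
        # Weighted sum of verb features = relational context for this instance
        context = [sum(w * v[i] for w, v in zip(weights, verb_feats)) for i in range(d)]
        # Residual fusion keeps the instance's own attributes while adding relation cues
        out.append([qi + ci for qi, ci in zip(q, context)])
    return out
```

With a single verb token the attention weight is 1, so each output is simply the instance feature plus that verb embedding; with several verbs, instances blend in the actions most similar to them.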