🤖 AI Summary
In layout-guided text-to-image generation, compositional fidelity remains low for multi-object, multi-attribute scenes—manifesting as object boundary violations, cross-object attribute leakage, and image distribution distortion. To address this, we propose the Masked Attribute-Aware Binding (MAAB) mechanism, which enforces strict spatial adherence via layout masks while explicitly modeling fine-grained attribute-to-object binding at the feature level. Integrated into a text-to-image diffusion framework, MAAB employs hierarchical conditional control to jointly govern layout structure and semantic attribution. Extensive experiments on complex compositional benchmarks demonstrate significant improvements: +12.3% in compositional accuracy, +18.7% in attribute binding correctness, and +9.5% in layout consistency over prior state-of-the-art methods. Both quantitative metrics and qualitative visual results confirm MAAB’s superiority in preserving structural integrity and semantic precision.
📝 Abstract
Recent advances in text-to-image models have enabled a new era of creative and controllable image generation. However, generating compositional scenes with multiple subjects and attributes remains a significant challenge. To enhance user control over subject placement, several layout-guided methods have been proposed. However, these methods face numerous challenges, particularly in compositional scenes. Unintended subjects often appear outside the layouts, generated images can be out-of-distribution and contain unnatural artifacts, or attributes bleed across subjects, leading to incorrect visual outputs. In this work, we propose MALeR, a method that addresses each of these challenges. Given a text prompt and corresponding layouts, our method prevents subjects from appearing outside the given layouts while being in-distribution. Additionally, we propose a masked, attribute-aware binding mechanism that prevents attribute leakage, enabling accurate rendering of subjects with multiple attributes, even in complex compositional scenes. Qualitative and quantitative evaluation demonstrates that our method achieves superior performance in compositional accuracy, generation consistency, and attribute binding compared to previous work. MALeR is particularly adept at generating images of scenes with multiple subjects and multiple attributes per subject.