MALeR: Improving Compositional Fidelity in Layout-Guided Generation

📅 2025-11-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In layout-guided text-to-image generation, compositional fidelity remains low for multi-object, multi-attribute scenes—manifesting as object boundary violations, cross-object attribute leakage, and image distribution distortion. To address this, we propose the Masked Attribute-Aware Binding (MAAB) mechanism, which enforces strict spatial adherence via layout masks while explicitly modeling fine-grained attribute-to-object binding at the feature level. Integrated into a text-to-image diffusion framework, MAAB employs hierarchical conditional control to jointly govern layout structure and semantic attribution. Extensive experiments on complex compositional benchmarks demonstrate significant improvements: +12.3% in compositional accuracy, +18.7% in attribute binding correctness, and +9.5% in layout consistency over prior state-of-the-art methods. Both quantitative metrics and qualitative visual results confirm MAAB’s superiority in preserving structural integrity and semantic precision.

Technology Category

Application Category

📝 Abstract
Recent advances in text-to-image models have enabled a new era of creative and controllable image generation. However, generating compositional scenes with multiple subjects and attributes remains a significant challenge. To enhance user control over subject placement, several layout-guided methods have been proposed. However, these methods face numerous challenges, particularly in compositional scenes. Unintended subjects often appear outside the layouts, generated images can be out-of-distribution and contain unnatural artifacts, or attributes bleed across subjects, leading to incorrect visual outputs. In this work, we propose MALeR, a method that addresses each of these challenges. Given a text prompt and corresponding layouts, our method prevents subjects from appearing outside the given layouts while being in-distribution. Additionally, we propose a masked, attribute-aware binding mechanism that prevents attribute leakage, enabling accurate rendering of subjects with multiple attributes, even in complex compositional scenes. Qualitative and quantitative evaluation demonstrates that our method achieves superior performance in compositional accuracy, generation consistency, and attribute binding compared to previous work. MALeR is particularly adept at generating images of scenes with multiple subjects and multiple attributes per subject.
Problem

Research questions and friction points this paper is trying to address.

Preventing unintended subjects outside specified layouts
Eliminating attribute leakage across multiple subjects
Generating in-distribution images without unnatural artifacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prevents subjects appearing outside given layouts
Uses masked attribute-aware binding mechanism
Prevents attribute leakage across multiple subjects
S
Shivank Saxena
CVIT, IIIT Hyderabad, India
D
Dhruv Srivastava
CVIT, IIIT Hyderabad and Adobe Research, India
Makarand Tapaswi
Makarand Tapaswi
IIIT Hyderabad, Wadhwani AI
AI for Social GoodStory UnderstandingVision and LanguageComputer VisionNLP