🤖 AI Summary
To address key challenges in open-vocabulary layout-to-image (L2I) generation, including weak layout representation, training-evaluation misalignment, and the lack of reliable evaluation metrics, this paper introduces a regional cross-attention module that enhances fine-grained layout modeling. It also proposes the first open-vocabulary dual metric, comprising Layout Fidelity and Semantic Consistency, and validates its strong correlation with human preferences (ρ > 0.92) through large-scale user studies. In addition, the method adopts an open-vocabulary training paradigm to improve generalization. Experiments show that it significantly outperforms state-of-the-art approaches in both layout fidelity and image quality under complex text prompts, with consistent gains across multiple benchmarks.
📝 Abstract
Recent advancements in generative models have significantly enhanced their capacity for image generation, enabling a wide range of applications such as image editing, image completion, and video editing. A specialized area within generative modeling is layout-to-image (L2I) generation, where predefined layouts of objects guide the generative process. In this study, we introduce a novel regional cross-attention module tailored to layout-to-image generation. This module notably improves the representation of layout regions, particularly in scenarios where existing methods struggle with highly complex and detailed textual descriptions. Moreover, while current open-vocabulary L2I methods are trained in an open-set setting, they are often evaluated in closed-set environments. To bridge this gap, we propose two metrics to assess L2I performance in open-vocabulary scenarios. Additionally, we conduct a comprehensive user study to validate the consistency of these metrics with human preferences.
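The abstract does not spell out the mechanism of the regional cross-attention module. A common way to realize region-conditioned cross-attention is to mask the image-to-text attention so that each image patch attends only to the text tokens describing the layout region that covers it. The sketch below illustrates that idea in NumPy; all names, shapes, and the masking scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def region_cross_attention(img_tokens, txt_tokens, attend_mask):
    """Masked cross-attention from image patches to region text tokens.

    img_tokens:  (N, d) flattened image patch features (queries)
    txt_tokens:  (T, d) text token features (keys/values)
    attend_mask: (N, T) boolean; True where patch i may attend to token j
    Returns (output, weights): (N, d) attended features and (N, T) weights.
    """
    d = img_tokens.shape[-1]
    scores = img_tokens @ txt_tokens.T / np.sqrt(d)   # (N, T) similarity
    scores = np.where(attend_mask, scores, -1e9)      # block out-of-region tokens
    weights = softmax(scores, axis=-1)                # masked entries underflow to ~0
    return weights @ txt_tokens, weights

# Toy example: a 4x4 patch grid split into two layout regions,
# each described by 3 text tokens (tokens 0-2 left, 3-5 right).
H = W = 4; d = 8; T = 6
rng = np.random.default_rng(0)
img = rng.standard_normal((H * W, d))
txt = rng.standard_normal((T, d))

mask = np.zeros((H * W, T), dtype=bool)
for i in range(H * W):
    col = i % W
    mask[i, :3] = col < W // 2    # left-half patches see region 0's tokens
    mask[i, 3:] = col >= W // 2   # right-half patches see region 1's tokens

out, w = region_cross_attention(img, txt, mask)
```

In this formulation, tightening or loosening `attend_mask` (e.g. soft masks from segmentation maps instead of hard bounding boxes) controls how strictly each region's text conditions its patches.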