🤖 AI Summary
In text-to-image (T2I) generation, accurately modeling the spatial relationships described in natural language prompts remains challenging. This paper proposes a lightweight, portable structured-information injection method: a fine-tuned language model automatically parses raw prompts into semantic tuples that explicitly encode spatial relations, and these tuples are then integrated into mainstream T2I pipelines. The approach eliminates manual prompt engineering, since automatically generated tuples achieve quality comparable to hand-crafted ones. Crucially, it improves spatial layout accuracy without degrading overall image fidelity. Experiments demonstrate substantial gains over baselines on spatial relationship evaluation metrics while the Inception Score remains stable, confirming the method's effectiveness, robustness, and plug-and-play compatibility with existing diffusion-based and autoregressive T2I frameworks.
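To make the pipeline concrete, the following is a minimal sketch of the parse-then-inject idea. The paper's actual tuple schema and fine-tuned parser are not given here, so this assumes a simple `(subject, relation, object)` format and substitutes a naive rule-based extractor where the fine-tuned language model would sit; `parse_prompt` and `augment_prompt` are hypothetical names for illustration.

```python
import re

# Assumed vocabulary of spatial relations; the paper's set may differ.
SPATIAL_RELATIONS = ["to the left of", "to the right of", "above", "below", "on top of"]

def parse_prompt(prompt):
    """Stand-in for the fine-tuned language model: extract
    (subject, relation, object) tuples describing spatial layout."""
    tuples = []
    for rel in SPATIAL_RELATIONS:
        pattern = rf"(?:a|an|the)?\s*(\w+)\s+{re.escape(rel)}\s+(?:a|an|the)?\s*(\w+)"
        for subj, obj in re.findall(pattern, prompt.lower()):
            tuples.append((subj, rel, obj))
    return tuples

def augment_prompt(prompt):
    """Append the structured tuples to the raw prompt before it is
    handed to the T2I model, so no manual prompt engineering is needed."""
    tuples = parse_prompt(prompt)
    if not tuples:
        return prompt
    structured = "; ".join(f"({s}, {r}, {o})" for s, r, o in tuples)
    return f"{prompt} | spatial: {structured}"
```

For example, `augment_prompt("A cat to the left of a dog")` yields the original prompt followed by the tuple `(cat, to the left of, dog)`; in the real system the fine-tuned model replaces the regex heuristic.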
📝 Abstract
Text-to-image (T2I) generation has advanced rapidly, yet faithfully capturing spatial relationships described in natural language prompts remains a major challenge. Prior efforts have addressed this issue through prompt optimization, spatially grounded generation, and semantic refinement. This work introduces a lightweight approach that augments prompts with tuple-based structured information, using a fine-tuned language model for automatic conversion and seamless integration into T2I pipelines. Experimental results demonstrate substantial improvements in spatial accuracy without compromising overall image quality as measured by Inception Score. Furthermore, the automatically generated tuples exhibit quality comparable to that of human-crafted tuples. This structured information provides a practical and portable solution for enhancing spatial relationships in T2I generation, addressing a key limitation of current large-scale generative systems.