Everything in Its Place: Benchmarking Spatial Intelligence of Text-to-Image Models

📅 2026-01-28
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the limited capability of current text-to-image models in modeling complex spatial relationships—such as positional arrangements, occlusion, and causality—highlighting that existing evaluation benchmarks fall short due to their reliance on short, information-sparse prompts. To bridge this gap, the authors introduce SpatialGenEval, a fine-grained evaluation framework spanning ten spatial subdomains, along with the SpatialT2I dataset comprising 1,230 information-dense long prompts and 15,400 human-verified image–text pairs. Through structured multiple-choice question-style prompts and fine-tuning experiments on models like Stable Diffusion-XL, they demonstrate that training on information-rich data significantly enhances spatial reasoning. Evaluations across 21 state-of-the-art models reveal persistent difficulties with higher-order spatial relations; however, fine-tuning on SpatialT2I yields consistent performance gains of 4.2%–5.7%, producing images more aligned with real-world spatial logic.

📝 Abstract
Text-to-image (T2I) models have achieved remarkable success in generating high-fidelity images, but they often fail to handle complex spatial relationships, e.g., spatial perception, reasoning, or interaction. These critical aspects are largely overlooked by current benchmarks because of their short, information-sparse prompt designs. In this paper, we introduce SpatialGenEval, a new benchmark designed to systematically evaluate the spatial intelligence of T2I models, covering two key aspects. (1) SpatialGenEval comprises 1,230 long, information-dense prompts across 25 real-world scenes. Each prompt integrates 10 spatial sub-domains and 10 corresponding multiple-choice question-answer pairs, ranging from object position and layout to occlusion and causality. Our extensive evaluation of 21 state-of-the-art models reveals that higher-order spatial reasoning remains a primary bottleneck. (2) To demonstrate that the utility of our information-dense design goes beyond evaluation, we also construct the SpatialT2I dataset. It contains 15,400 text-image pairs with rewritten prompts that ensure image consistency while preserving information density. Fine-tuning current foundation models (i.e., Stable Diffusion-XL, Uniworld-V1, OmniGen2) yields consistent performance gains (+4.2%, +5.7%, +4.4%) and more realistic spatial relations, highlighting a data-centric paradigm for achieving spatial intelligence in T2I models.
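The abstract describes scoring generated images against each prompt's multiple-choice question-answer pairs across ten spatial sub-domains. A minimal sketch of how such answers could be aggregated into per-subdomain and overall accuracy (the function and record fields are assumptions for illustration, not the authors' code):

```python
from collections import defaultdict

def score_spatial_eval(records):
    """Aggregate multiple-choice answers into per-subdomain accuracy.

    `records` is a list of dicts with keys (hypothetical schema):
      'subdomain' : one of the spatial sub-domains (e.g. 'occlusion')
      'predicted' : the option a judge model picked for the generated image
      'correct'   : the ground-truth option from the prompt's QA pair
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["subdomain"]] += 1
        hits[r["subdomain"]] += int(r["predicted"] == r["correct"])
    per_sub = {s: hits[s] / totals[s] for s in totals}
    overall = sum(hits.values()) / sum(totals.values())
    return per_sub, overall
```

Reported fine-tuning gains (e.g. +4.2% for Stable Diffusion-XL) would then correspond to deltas in the overall accuracy computed this way.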
Problem

Research questions and friction points this paper is trying to address.

spatial intelligence
text-to-image models
spatial reasoning
benchmark
complex spatial relationships
Innovation

Methods, ideas, or system contributions that make the work stand out.

spatial intelligence
text-to-image generation
benchmark
information-dense prompts
data-centric learning
Zengbin Wang
AMAP, Alibaba Group; Beijing University of Posts and Telecommunications
Xuecai Hu
AMAP, Alibaba Group
Yong Wang
Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Optimization, Bioinformatics, Systems biology, Complex network, Computational Biology
Feng Xiong
Alibaba-inc
Computer Vision
Man Zhang
Beijing University of Posts and Telecommunications
Xiangxiang Chu
AMAP, Alibaba Group