🤖 AI Summary
Current text-to-image (T2I) models exhibit significant performance degradation when generating images from non-camera-centric spatial descriptions (e.g., "to the left" or "directly ahead"), largely because Frame of Reference (FoR) modeling has been neglected. This work introduces explicit FoR modeling into multimodal language–vision frameworks. We propose a spatially aware text–image alignment and self-correction framework: a vision module parses the spatial structure of the image; textual spatial expressions are uniformly mapped into the camera coordinate system; and directional and depth relationships are jointly adjusted in latent space. To rigorously evaluate FoR consistency, we construct two dedicated benchmarks and design a unified viewpoint-alignment evaluation. Experiments demonstrate that a single round of correction improves state-of-the-art T2I models by up to 5.3%, substantially enhancing their capacity to interpret and generate spatial descriptions grounded in explicit reference frames.
📝 Abstract
Frame of Reference (FoR) is a fundamental concept in spatial reasoning that humans use to comprehend and describe space. With the rapid progress of multimodal language models, the moment has come to integrate this long-overlooked dimension into these models. In particular, in text-to-image (T2I) generation, even state-of-the-art models exhibit a significant performance gap when spatial descriptions are given from perspectives other than the camera's. To address this limitation, we propose Frame of Reference-guided Spatial Adjustment in LLM-based Diffusion Editing (FoR-SALE), an extension of the Self-correcting LLM-controlled Diffusion (SLD) framework for T2I. FoR-SALE evaluates the alignment between a given text and an initially generated image, and refines the image based on the Frame of Reference specified in the spatial expressions. It employs vision modules to extract the spatial configuration of the image, while mapping the spatial expression to the corresponding camera perspective. This unified perspective enables direct evaluation of alignment between language and vision. When a misalignment is detected, the required editing operations are generated and applied. FoR-SALE introduces novel latent-space operations to adjust the facing direction and depth of generated images. We evaluate FoR-SALE on two benchmarks specifically designed to assess spatial understanding with FoR. Our framework improves the performance of state-of-the-art T2I models by up to 5.3% using only a single round of correction.
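The core alignment check the abstract describes, mapping a spatial expression from the reference object's perspective into the camera frame before comparing it with what the vision module observes, can be sketched roughly as follows. The function names, the yaw convention, and the two-word relation vocabulary are illustrative assumptions, not the paper's actual implementation:

```python
# Minimal sketch (assumed names/conventions, not FoR-SALE's implementation)
# of unifying a relatum-centric spatial relation into the camera frame.
# Yaw convention assumed here: 0 deg = the reference object faces the camera,
# 180 deg = it faces away (same direction as the camera looks).

def to_camera_frame(relation: str, relatum_yaw_deg: float) -> str:
    """Convert 'left'/'right' stated from the reference object's perspective
    into the camera's perspective, given the object's facing direction."""
    if relation not in {"left", "right"}:
        return relation  # depth relations ('front'/'behind') handled separately
    # An object facing the camera has its left on the camera's right.
    facing_camera = abs(((relatum_yaw_deg + 180.0) % 360.0) - 180.0) < 90.0
    if facing_camera:
        return "right" if relation == "left" else "left"
    return relation

def aligned(text_relation: str, relatum_yaw_deg: float,
            observed_camera_relation: str) -> bool:
    """Check text-image agreement once both sides are in the camera frame;
    a mismatch would trigger the framework's editing operations."""
    return to_camera_frame(text_relation, relatum_yaw_deg) == observed_camera_relation
```

For instance, "the ball is to the dog's left" with the dog facing the camera means the ball should appear on the camera's right, so `aligned("left", 0.0, "right")` holds, while a ball detected on the camera's left would be flagged for correction.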