🤖 AI Summary
This work addresses the limited spatial understanding and layout consistency of current large language models (LLMs) and vision-language models (VLMs) in fine-grained visual editing. The authors propose a structured reasoning framework that explicitly models scene-graph relationships to enable controllable, interpretable spatial-layout editing guided by natural-language instructions. The approach combines scene-graph representations, structured relational reasoning, and language guidance within a dedicated training paradigm, avoiding the pitfalls of conventional end-to-end methods as well as chain-of-thought supervised fine-tuning and vanilla GRPO. On a newly introduced benchmark for text-guided layout editing, the method improves average IoU by 15%, reduces center-distance error by 25%, and outperforms zero-shot state-of-the-art LLMs by up to 20% in mIoU.
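To make the setting concrete, the following sketch shows one way a scene-graph layout edit could look. The graph schema, the `apply_edit` helper, the instruction string, and the object names are all hypothetical illustrations; the paper's actual representation and editing procedure are not specified here.

```python
# Hypothetical scene graph: objects with normalized (x1, y1, x2, y2) boxes
# plus symbolic spatial relations. This schema is an assumption for illustration.
scene_graph = {
    "objects": {
        "sofa":  {"bbox": [0.40, 0.60, 0.80, 0.90]},
        "table": {"bbox": [0.85, 0.65, 0.95, 0.85]},
    },
    "relations": [("table", "right_of", "sofa")],
}


def apply_edit(graph, instruction):
    """Toy hand-coded edit standing in for the model's reasoning step."""
    if instruction == "move the table to the left of the sofa":
        sofa = graph["objects"]["sofa"]["bbox"]
        table = graph["objects"]["table"]["bbox"]
        w = table[2] - table[0]  # preserve the table's width
        # Place the table's right edge just left of the sofa's left edge,
        # keeping its size and vertical span, then update the relation.
        graph["objects"]["table"]["bbox"] = [
            sofa[0] - w - 0.02, table[1], sofa[0] - 0.02, table[3]
        ]
        graph["relations"] = [("table", "left_of", "sofa")]
    return graph
```

In the paper's framework this edit would be produced by reasoning over the graph rather than by a hard-coded rule; the point is only that both the input and the output are explicit scene graphs, which is what makes the edit interpretable and checkable.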
📝 Abstract
Large Language Models (LLMs) and Vision-Language Models (VLMs) have shown impressive reasoning abilities, yet they struggle with spatial understanding and layout consistency when performing fine-grained visual editing. We introduce a Structured Reasoning framework that performs text-conditioned spatial layout editing via scene-graph reasoning. Given an input scene graph and a natural-language instruction, the model reasons over the graph to generate an updated scene graph that satisfies the text condition while maintaining spatial coherence. By explicitly guiding the reasoning process through structured relational representations, our approach improves both interpretability and control over spatial relationships. We evaluate our method on a new text-guided layout editing benchmark encompassing sorting, spatial alignment, and room-editing tasks. Our training paradigm yields an average 15% improvement in IoU and a 25% reduction in center-distance error compared to Chain-of-Thought Supervised Fine-Tuning (CoT-SFT) and vanilla GRPO baselines. Compared to state-of-the-art (SOTA) zero-shot LLMs, our best models achieve up to 20% higher mIoU, demonstrating markedly improved spatial precision.
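For readers unfamiliar with the reported metrics, here is a minimal sketch of IoU and center distance for axis-aligned boxes. These are the standard definitions and are given only for illustration; the benchmark's exact evaluation protocol (box format, normalization, mIoU averaging) is not specified here.

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def center_distance(a, b):
    """Euclidean distance between the centers of two boxes."""
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
```

A predicted layout scores well when each predicted box both overlaps its target (high IoU) and sits near the target's center (low center distance), which is why the two metrics together capture spatial precision.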