🤖 AI Summary
MetaSpatial addresses two key limitations of vision-language models (VLMs): their lack of 3D spatial reasoning capability and the reliance of supervised fine-tuning on high-quality ground-truth annotations. To this end, it introduces the first reinforcement learning (RL)-driven framework tailored for metaverse applications. Methodologically, it proposes a multi-turn, self-iterative RL optimization scheme that combines physics-aware constraint modeling with rendering-based evaluation, enabling real-time 3D layout generation and refinement without hard-coded optimization rules. Its core contribution is eliminating dependence on ground-truth labels through a self-reflective VLM reasoning architecture. Experiments demonstrate significant improvements in spatial consistency, layout stability, and object-placement plausibility across models of different scales. The framework is validated on AR/VR, digital-twin, and game-development tasks, confirming both its effectiveness and its generalizability.
📝 Abstract
We present MetaSpatial, the first reinforcement learning (RL)-based framework designed to enhance 3D spatial reasoning in vision-language models (VLMs), enabling real-time 3D scene generation without the need for hard-coded optimizations. MetaSpatial addresses two core challenges: (i) the lack of internalized 3D spatial reasoning in VLMs, which limits their ability to generate realistic layouts, and (ii) the inefficiency of traditional supervised fine-tuning (SFT) for layout generation tasks, as perfect ground-truth annotations are unavailable. Our key innovation is a multi-turn RL-based optimization mechanism that integrates physics-aware constraints and rendered image evaluations, ensuring generated 3D layouts are coherent, physically plausible, and aesthetically consistent. Methodologically, MetaSpatial introduces an adaptive, iterative reasoning process in which the VLM refines spatial arrangements over multiple turns by analyzing rendered outputs, progressively improving scene coherence. Empirical evaluations demonstrate that MetaSpatial significantly enhances the spatial consistency and formatting stability of models at various scales. Post-training, object placements are more realistic, better aligned, and functionally coherent, validating the effectiveness of RL for 3D spatial reasoning in metaverse, AR/VR, digital twin, and game development applications. Our code, data, and training pipeline are publicly available at https://github.com/PzySeere/MetaSpatial.
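To make the multi-turn, physics-aware refinement idea concrete, here is a minimal toy sketch in Python. Everything in it is illustrative: the 1-D "room", the `physics_reward` and `refine` helpers, and the hill-climbing loop are assumptions standing in for MetaSpatial's actual VLM proposals, rendered-image evaluation, and RL update, none of which appear here.

```python
# Toy sketch of a multi-turn, physics-aware layout refinement loop.
# NOTE: all names and logic here are hypothetical stand-ins, not the
# authors' API; the real system uses a VLM and an RL policy update.
import random

random.seed(0)

ROOM = (0.0, 10.0)  # 1-D room extent (min, max), for illustration


def collides(a, b):
    """Overlap test for 1-D objects given as (position, width)."""
    return a[0] < b[0] + b[1] and b[0] < a[0] + a[1]


def physics_reward(layout):
    """Fraction of objects that are inside the room and collision-free.
    Stands in for MetaSpatial's physics-aware constraint evaluation."""
    ok = 0
    for i, obj in enumerate(layout):
        inside = ROOM[0] <= obj[0] and obj[0] + obj[1] <= ROOM[1]
        clear = all(not collides(obj, o)
                    for j, o in enumerate(layout) if j != i)
        if inside and clear:
            ok += 1
    return ok / len(layout)


def refine(layout):
    """Stand-in for one refinement turn: reposition a random object."""
    new = [list(o) for o in layout]
    i = random.randrange(len(new))
    new[i][0] = random.uniform(ROOM[0], ROOM[1] - new[i][1])
    return [tuple(o) for o in new]


# Multi-turn loop: propose, evaluate, keep the best layout seen so far.
layout = [(0.0, 3.0), (1.0, 3.0), (8.0, 3.0)]  # overlaps + out-of-bounds
best, best_r = layout, physics_reward(layout)
for turn in range(50):
    cand = refine(best)
    r = physics_reward(cand)
    if r > best_r:
        best, best_r = cand, r

print(best_r)
```

The loop mirrors the paper's iterate-evaluate-refine structure at a cartoon level: each turn proposes an adjusted layout and a physics-derived score decides whether to keep it, whereas the actual framework feeds such rewards into a policy-gradient update of the VLM rather than greedy acceptance.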