🤖 AI Summary
Existing data-driven approaches struggle to generate 3D scenes that simultaneously exhibit realistic complexity and physical plausibility due to the absence of structured training data and explicit physical modeling. This work proposes a geometry-aware diffusion model that jointly captures spatial layouts and physical relationships through a graph Transformer, conditioned on pose-augmented scene point clouds to guide the generation process. Furthermore, a differentiable physics-guided mechanism is introduced to enforce collision-free arrangements, semantic relationship constraints, and gravity consistency. By uniquely integrating geometry-aware diffusion with differentiable physics-based optimization, the method achieves state-of-the-art performance in both spatial-relational reasoning and physical plausibility metrics on 3D-FRONT and ProcTHOR, producing scenes that demonstrate remarkable stability and consistency before and after simulation.
📝 Abstract
Automated 3D scene generation is pivotal for applications spanning virtual reality, digital content creation, and Embodied AI. While computer graphics prioritizes aesthetic layouts, vision and robotics demand scenes that mirror real-world complexity, which current data-driven methods struggle to achieve due to limited, unstructured training data and insufficient spatial and physical modeling. We propose SPREAD, a diffusion-based framework that jointly learns spatial and physical relationships through a graph transformer, explicitly conditioning on posed scene point clouds for geometric awareness. Our model further integrates differentiable guidance for collision avoidance, relational constraints, and gravity, ensuring physically coherent scenes without sacrificing relational context. Experiments on the 3D-FRONT and ProcTHOR datasets demonstrate state-of-the-art performance in spatial-relational reasoning and physical metrics. Moreover, SPREAD outperforms baselines in scene consistency and stability before and after physics simulation, proving its capability to generate simulation-ready environments for embodied AI agents.
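The paper does not give implementation details here, but the differentiable physics guidance it describes can be illustrated with a minimal sketch: define a differentiable loss over object placements (collision overlap plus a gravity term penalizing floating objects) and nudge the layout along its negative gradient, as classifier-guidance-style diffusion samplers do at each denoising step. All function names, the axis-aligned-box scene representation, and the numerical gradient are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def collision_penalty(centers, sizes):
    """Sum of pairwise axis-aligned bounding-box overlap volumes
    (a hypothetical stand-in for the paper's collision term)."""
    total = 0.0
    n = len(centers)
    for i in range(n):
        for j in range(i + 1, n):
            # Per-axis overlap length, clipped at zero when boxes are disjoint.
            hi = np.minimum(centers[i] + sizes[i] / 2, centers[j] + sizes[j] / 2)
            lo = np.maximum(centers[i] - sizes[i] / 2, centers[j] - sizes[j] / 2)
            total += np.clip(hi - lo, 0.0, None).prod()
    return float(total)

def gravity_penalty(centers, sizes):
    """Penalize bottom faces that do not rest on the floor plane z = 0."""
    bottoms = centers[:, 2] - sizes[:, 2] / 2
    return float(np.sum(bottoms ** 2))

def physics_loss(centers, sizes):
    return collision_penalty(centers, sizes) + gravity_penalty(centers, sizes)

def guided_step(centers, sizes, step_size=0.1, eps=1e-4):
    """One guidance update: move object centers down the (finite-difference)
    gradient of the physics loss, as a guided sampler would per step."""
    grad = np.zeros_like(centers)
    base = physics_loss(centers, sizes)
    for idx in np.ndindex(centers.shape):
        pert = centers.copy()
        pert[idx] += eps
        grad[idx] = (physics_loss(pert, sizes) - base) / eps
    return centers - step_size * grad
```

In a real diffusion pipeline this gradient would be taken analytically (via autodiff) with respect to the denoised layout prediction and folded into the sampling update, alongside the learned relational constraints.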