🤖 AI Summary
Existing 3D scene generation methods rely on fixed pipelines and struggle to efficiently produce diverse environments that are physically plausible, semantically coherent, and directly simulation-ready, which hinders scalable training of embodied agents. This work proposes SAGE, a novel framework that introduces, for the first time, an embodied task-driven agent architecture. By interpreting task intent, SAGE adaptively orchestrates layout and object generators and employs multimodal critics—semantic, visual, and physical—to iteratively refine scenes through self-reflective optimization, enabling end-to-end generation of simulation-ready 3D environments. The resulting SAGE-10k dataset substantially improves policy training performance and demonstrates strong generalization and scalability.
📝 Abstract
Real-world data collection for embodied agents remains costly and unsafe, calling for scalable, realistic, and simulator-ready 3D environments. However, existing scene-generation systems often rely on rule-based or task-specific pipelines, yielding artifacts and physically invalid scenes. We present SAGE, an agentic framework that, given a user-specified embodied task (e.g., "pick up a bowl and place it on the table"), understands the intent and automatically generates simulation-ready environments at scale. The agent couples multiple generators for layout and object composition with critics that evaluate semantic plausibility, visual realism, and physical stability. Through iterative reasoning and adaptive tool selection, it self-refines the scenes until they meet the user's intent and physical validity. The resulting environments are realistic, diverse, and directly deployable in modern simulators for policy training. Policies trained purely on this data exhibit clear scaling trends and generalize to unseen objects and layouts, demonstrating the promise of simulation-driven scaling for embodied AI. Code, demos, and the SAGE-10k dataset can be found on the project page: https://nvlabs.github.io/sage.
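The generate–critique–refine loop the abstract describes can be illustrated with a minimal sketch. All names here (`generate_scene`, `critics`, `refine`, the score threshold) are hypothetical stand-ins for illustration only, not SAGE's actual API; the real system couples learned generators and multimodal critics rather than the toy functions below.

```python
# Hypothetical sketch of a SAGE-style self-refinement loop.
# Every function and field name is illustrative, not from the paper.
from dataclasses import dataclass, field

@dataclass
class Scene:
    layout: dict
    objects: list
    notes: list = field(default_factory=list)  # which critic triggered each refinement

def generate_scene(task: str) -> Scene:
    # Stand-in for the layout/object generators conditioned on task intent.
    return Scene(layout={"room": "kitchen"}, objects=["bowl", "table"])

def run_critics(scene: Scene) -> dict:
    # Stand-in critics; each returns a score in [0, 1] for its axis.
    return {"semantic": 0.9, "visual": 0.9, "physical": 0.9}

def refine(scene: Scene, scores: dict) -> Scene:
    # Stand-in refinement: note the weakest axis and revise the scene.
    scene.notes.append(min(scores, key=scores.get))
    return scene

def sage_loop(task: str, threshold: float = 0.8, max_iters: int = 5) -> Scene:
    """Iterate generation and critique until all critics pass or budget runs out."""
    scene = generate_scene(task)
    for _ in range(max_iters):
        scores = run_critics(scene)
        if min(scores.values()) >= threshold:
            break  # scene meets intent and physical validity
        scene = refine(scene, scores)
    return scene
```

The key design point the abstract highlights is that the stopping condition is multi-criteria: a scene is accepted only when semantic, visual, and physical critics all pass, not when any single score does.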