🤖 AI Summary
This work investigates whether mainstream 2D foundation image models implicitly possess the capacity to model 3D worlds and introduces a multi-agent framework for generating high-quality, geometrically consistent 3D scenes. The proposed approach uniquely organizes 2D image generation models as agents, coordinated by a vision-language model (VLM) acting as a “director” that produces viewpoint-specific prompts to guide image generators in synthesizing novel views. A two-stage VLM-based verifier jointly evaluates outputs in both the 2D image space and the reconstructed 3D space to ensure consistency and fidelity. Experiments demonstrate that the framework produces photorealistic, geometrically coherent 3D scenes amenable to interactive exploration, revealing that 2D foundation models indeed encode meaningful 3D priors despite being trained solely on 2D data.
📝 Abstract
Given the remarkable ability of 2D foundation image models to generate high-fidelity outputs, we investigate a fundamental question: do 2D foundation image models inherently possess 3D world model capabilities? To answer this, we systematically evaluate multiple state-of-the-art image generation models and Vision-Language Models (VLMs) on the task of 3D world synthesis. To harness and benchmark their potential implicit 3D capability, we propose an agentic framework for 3D world generation. Our approach employs a multi-agent architecture: a VLM-based director that formulates prompts to guide image synthesis, a generator that synthesizes new image views, and a VLM-backed two-step verifier that evaluates and selectively curates generated frames in both the 2D image space and the 3D reconstruction space. Crucially, we demonstrate that our agentic approach yields coherent and robust 3D reconstructions, producing output scenes that can be explored by rendering novel views. Through extensive experiments across various foundation models, we show that 2D models do indeed encode an understanding of 3D worlds. By exploiting this understanding, our method successfully synthesizes expansive, realistic, and 3D-consistent worlds.
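The director → generator → verifier pipeline described above can be sketched as a simple control loop. This is an illustrative skeleton, not the paper's implementation: every function name (`direct`, `generate`, `verify_2d`, `verify_3d`, `build_world`) is hypothetical, and the bodies are placeholders standing in for the VLM and image-model calls.

```python
# Hypothetical sketch of the multi-agent loop from the abstract.
# All bodies are placeholders; in the actual system, `direct` and the
# verifiers would be VLM calls, and `generate` a 2D image model.

def direct(scene_desc, viewpoint):
    """Director VLM: produce a viewpoint-specific prompt for the generator."""
    return f"{scene_desc}, seen from the {viewpoint}"

def generate(prompt):
    """Generator: a 2D foundation image model would synthesize a view here."""
    return {"prompt": prompt, "image": f"<rendered view for: {prompt}>"}

def verify_2d(frame):
    """Verifier step 1: VLM check in 2D image space (fidelity, prompt adherence)."""
    return True  # placeholder: accept every frame

def verify_3d(frames):
    """Verifier step 2: curate frames jointly in the reconstructed 3D space."""
    return frames  # placeholder: keep all multi-view-consistent frames

def build_world(scene_desc, viewpoints):
    candidates = []
    for vp in viewpoints:
        frame = generate(direct(scene_desc, vp))
        if verify_2d(frame):          # reject low-quality frames early
            candidates.append(frame)
    return verify_3d(candidates)      # final curation against the 3D reconstruction

frames = build_world("a sunlit courtyard", ["front", "left side", "rear"])
print(len(frames))  # with the placeholder checks, all 3 viewpoints survive
```

The key design point the abstract emphasizes is the two-stage verification: per-frame checks in 2D happen before the jointly evaluated 3D-consistency pass, so only curated frames feed the reconstruction.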