Agentic 3D Scene Generation with Spatially Contextualized VLMs

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) exhibit limited capability in structured 3D scene understanding and generation, hindering their deployment in spatially grounded tasks such as embodied AI and immersive interaction. To address this, we propose the *spatial context*—a dynamic, geometry-aware working memory with three components: a scene portrait serving as a semantic blueprint, a semantically labeled point cloud capturing object-level geometry, and a scene hypergraph encoding relational constraints—which the VLM iteratively reads from and writes to. Building on this, we introduce an agentic 3D scene generation pipeline featuring geometrically consistent asset generation, environment setup with automatic verification, and hypergraph-guided ergonomic adjustment. Experiments demonstrate substantial improvements in generalization and controllability for complex 3D scenes, and show that the injected spatial context enables downstream tasks such as interactive scene editing and path planning. Our approach advances practical applicability across computer graphics, 3D vision, and embodied AI.

📝 Abstract
Despite recent advances in multimodal content generation enabled by vision-language models (VLMs), their ability to reason about and generate structured 3D scenes remains largely underexplored. This limitation constrains their utility in spatially grounded tasks such as embodied AI, immersive simulations, and interactive 3D applications. We introduce a new paradigm that enables VLMs to generate, understand, and edit complex 3D environments by injecting a continually evolving spatial context. Constructed from multimodal input, this context consists of three components: a scene portrait that provides a high-level semantic blueprint, a semantically labeled point cloud capturing object-level geometry, and a scene hypergraph that encodes rich spatial relationships, including unary, binary, and higher-order constraints. Together, these components provide the VLM with a structured, geometry-aware working memory that integrates its inherent multimodal reasoning capabilities with structured 3D understanding for effective spatial reasoning. Building on this foundation, we develop an agentic 3D scene generation pipeline in which the VLM iteratively reads from and updates the spatial context. The pipeline features high-quality asset generation with geometric restoration, environment setup with automatic verification, and ergonomic adjustment guided by the scene hypergraph. Experiments show that our framework can handle diverse and challenging inputs, achieving a level of generalization not observed in prior work. Further results demonstrate that injecting spatial context enables VLMs to perform downstream tasks such as interactive scene editing and path planning, suggesting strong potential for spatially intelligent systems in computer graphics, 3D vision, and embodied applications.
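
To make the three-part spatial context concrete, here is a minimal Python sketch of how such a working memory might be organized. All class, field, and method names (`SpatialContext`, `Hyperedge`, `add_constraint`, etc.) are illustrative assumptions, not the paper's actual data structures.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class Hyperedge:
    """A spatial constraint over one or more objects.

    Arity 1 = unary (e.g. "chair faces the window"), arity 2 = binary
    (e.g. "lamp next to chair"), arity >= 3 = higher-order (e.g.
    "chairs evenly spaced around the table").
    """
    relation: str
    object_ids: tuple[str, ...]


@dataclass
class SpatialContext:
    """Geometry-aware working memory the VLM reads from and writes to."""
    # High-level semantic blueprint of the scene (scene portrait).
    scene_portrait: str
    # Object-level geometry: object id -> (N, 3) point cloud, plus a label.
    point_clouds: dict[str, np.ndarray] = field(default_factory=dict)
    labels: dict[str, str] = field(default_factory=dict)
    # Scene hypergraph: unary, binary, and higher-order constraints.
    hyperedges: list[Hyperedge] = field(default_factory=list)

    def add_object(self, obj_id: str, label: str, points: np.ndarray) -> None:
        self.point_clouds[obj_id] = points
        self.labels[obj_id] = label

    def add_constraint(self, relation: str, *object_ids: str) -> None:
        self.hyperedges.append(Hyperedge(relation, tuple(object_ids)))


# Usage with placeholder geometry:
ctx = SpatialContext(scene_portrait="cozy reading corner, top-down layout")
ctx.add_object("chair_0", "armchair", np.zeros((1024, 3)))
ctx.add_object("lamp_0", "floor lamp", np.zeros((512, 3)))
ctx.add_constraint("next_to", "lamp_0", "chair_0")   # binary constraint
ctx.add_constraint("faces_window", "chair_0")        # unary constraint
```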
Problem

Research questions and friction points this paper is trying to address.

Enabling VLMs to generate and understand structured 3D scenes
Integrating multimodal reasoning with 3D spatial context
Developing agentic pipelines for interactive 3D scene editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

VLMs generate 3D scenes with spatial context
Scene hypergraph encodes rich spatial relationships
Agentic pipeline iteratively reads and updates the spatial context (see the sketch after this list)
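
The iterative read-write loop can be pictured as below. This is a sketch under assumed interfaces: `vlm.propose`, `action.apply`, and the `verify` callback are hypothetical stand-ins for the model call, context updates, and the automatic verification step, not the paper's actual API.

```python
def agentic_scene_generation(vlm, ctx, verify, max_rounds=8):
    """Iterate until the verifier accepts the scene or rounds run out."""
    issues = []
    for _ in range(max_rounds):
        # Read: the VLM sees the serialized spatial context plus any
        # verifier feedback from the previous round.
        actions = vlm.propose(ctx, issues)
        # Write: each proposed action (add / move / replace an asset,
        # adjust a constraint, ...) mutates the spatial context.
        for action in actions:
            action.apply(ctx)
        # Check hypergraph constraints, collisions, reachability, etc.
        ok, issues = verify(ctx)
        if ok:
            break
    return ctx
```

Feeding the verifier's reported violations back into the next proposal round is what makes the loop agentic rather than a single-shot generation.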