RAISECity: A Multimodal Agent Framework for Reality-Aligned 3D World Generation at City-Scale

📅 2025-11-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing approaches to city-scale 3D world generation struggle to simultaneously achieve geometric accuracy, texture fidelity, visual realism, and scalability. This paper proposes a multimodal agent framework that integrates large language models, vision foundation models, and graphics pipelines. It enables end-to-end high-fidelity urban scene generation through dynamic data processing, iterative self-reflection optimization, and coordinated multimodal tool invocation. A key innovation is the introduction of an intermediate representation maintenance mechanism, which effectively mitigates error accumulation and ensures strong alignment of generated outputs with real-world geographic, semantic, and visual characteristics. Experiments demonstrate that our method significantly outperforms state-of-the-art methods in shape accuracy, texture quality, and overall perceptual realism; user studies show a preference win rate exceeding 90%.

📝 Abstract
City-scale 3D generation is of great importance for the development of embodied intelligence and world models. Existing methods, however, face significant challenges regarding quality, fidelity, and scalability in 3D world generation. Thus, we propose RAISECity, a Reality-Aligned Intelligent Synthesis Engine that creates detailed, City-scale 3D worlds. We introduce an agentic framework that leverages diverse multimodal foundation tools to acquire real-world knowledge, maintain robust intermediate representations, and construct complex 3D scenes. This agentic design, featuring dynamic data processing, iterative self-reflection and refinement, and the invocation of advanced multimodal tools, minimizes cumulative errors and enhances overall performance. Extensive quantitative experiments and qualitative analyses validate the superior performance of RAISECity in real-world alignment, shape precision, texture fidelity, and aesthetic quality, achieving an over 90% win rate against existing baselines for overall perceptual quality. This combination of 3D quality, reality alignment, scalability, and seamless compatibility with computer graphics pipelines makes RAISECity a promising foundation for applications in immersive media, embodied intelligence, and world models.
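The paper does not publish its implementation here, but the abstract's core loop — propose a scene element with multimodal tools, self-reflect on its quality, and only commit it to a maintained intermediate representation once it passes review — can be sketched in Python. All names below (`SceneState`, `propose`, `critique`, the quality heuristic) are illustrative assumptions, not the authors' code; the stubs stand in for real LLM/vision tool calls.

```python
from dataclasses import dataclass, field

@dataclass
class SceneState:
    """Hypothetical intermediate representation shared across refinement
    rounds: committed geometry, textures, and a log of critique scores."""
    geometry: dict = field(default_factory=dict)
    textures: dict = field(default_factory=dict)
    history: list = field(default_factory=list)

def propose(state: SceneState, block_id: str) -> dict:
    # Stand-in for a multimodal tool call (LLM + vision model) drafting one
    # city block; quality improves as feedback accumulates in the history.
    return {"block": block_id, "quality": len(state.history) * 0.2 + 0.3}

def critique(candidate: dict) -> float:
    # Stand-in for a self-reflection step scoring alignment with
    # real-world geographic, semantic, and visual references.
    return candidate["quality"]

def refine_block(state: SceneState, block_id: str,
                 threshold: float = 0.8, max_rounds: int = 5) -> SceneState:
    """Iterative self-reflection loop: propose, critique, and commit a
    candidate to the shared representation only once it clears the
    threshold, so low-quality drafts never pollute downstream blocks."""
    for _ in range(max_rounds):
        candidate = propose(state, block_id)
        score = critique(candidate)
        state.history.append((block_id, score))
        if score >= threshold:
            state.geometry[block_id] = candidate  # commit accepted draft
            return state
    return state  # leave the block unresolved rather than commit a bad draft

state = SceneState()
refine_block(state, "block_001")
print("block_001" in state.geometry)  # → True (converges on round 4 here)
```

The gate before commit is the point of the sketch: because rejected drafts never enter the shared state, errors from one block cannot accumulate into the representations that later tool calls read, which is the role the paper assigns to its intermediate representation maintenance mechanism.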
Problem

Research questions and friction points this paper is trying to address.

Addresses quality and scalability challenges in city-scale 3D world generation
Minimizes cumulative errors through dynamic processing and iterative refinement
Enhances reality alignment and fidelity for immersive applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Agentic framework leveraging multimodal foundation tools
Dynamic data processing with iterative self-reflection refinement
Advanced multimodal tools minimizing cumulative errors
Shengyuan Wang
Tsinghua University
Zhiheng Zheng
Shenzhen International Graduate School, Tsinghua University, Beijing, China
Yu Shang
Department of Electronic Engineering, Tsinghua University
Multimodal Learning · LLM Agent · Recommender System
Lixuan He
Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China
Yangcheng Yu
Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China
Hangyu Fan
Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China
Jie Feng
Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China
Qingmin Liao
Shenzhen International Graduate School, Tsinghua University, Beijing, China
Yong Li
Department of Electronic Engineering, BNRist, Tsinghua University, Beijing, China