EvoWorld: Evolving Panoramic World Generation with Explicit 3D Memory

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of spatial consistency and geometric plausibility in long-horizon panoramic video generation by proposing a generative framework built on an evolving explicit 3D memory. Methodologically, the framework couples a fine-grained viewpoint-controlled video generator with a feed-forward transformer that evolves the scene's 3D reconstruction online, and it enforces spatial constraints at target viewpoints by geometrically reprojecting the reconstruction as conditioning for generation. The contributions are fourfold: (1) the first method enabling long-horizon, loop-closing virtual navigation from a single panoramic image; (2) significant improvements in the visual fidelity and geometric consistency of generated videos; (3) state-of-the-art performance across synthetic outdoor, Habitat indoor, and real-world scenes; and (4) the first comprehensive benchmark specifically designed for evaluating long-horizon exploration.

📝 Abstract
Humans possess a remarkable ability to mentally explore and replay 3D environments they have previously experienced. Inspired by this mental process, we present EvoWorld: a world model that bridges panoramic video generation with evolving 3D memory to enable spatially consistent long-horizon exploration. Given a single panoramic image as input, EvoWorld first generates future video frames by leveraging a video generator with fine-grained view control, then evolves the scene's 3D reconstruction using a feedforward plug-and-play transformer, and finally synthesizes future frames by conditioning on geometric reprojections from this evolving explicit 3D memory. Unlike prior state-of-the-art methods that only synthesize videos, our key insight lies in exploiting this evolving 3D reconstruction as explicit spatial guidance for the video generation process, projecting the reconstructed geometry onto target viewpoints to provide rich spatial cues that significantly enhance both visual realism and geometric consistency. To evaluate long-range exploration capabilities, we introduce the first comprehensive benchmark spanning synthetic outdoor environments, Habitat indoor scenes, and challenging real-world scenarios, with particular emphasis on loop-closure detection and spatial coherence over extended trajectories. Extensive experiments demonstrate that our evolving 3D memory substantially improves visual fidelity and maintains spatial scene coherence compared to existing approaches, representing a significant advance toward long-horizon spatially consistent world modeling.
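The generate-then-evolve loop described in the abstract can be sketched as pseudocode-style Python. All callables here (`video_gen`, `recon_init`, `recon_update`, `reproject`) are illustrative stand-ins, not the paper's actual interfaces:

```python
def explore(panorama, trajectory, video_gen, recon_init, recon_update, reproject):
    """Hypothetical sketch of the EvoWorld rollout loop.

    panorama: the single input panoramic image.
    trajectory: sequence of target camera poses to explore.
    The four callables are placeholders for the paper's components.
    """
    memory = recon_init(panorama)   # bootstrap the explicit 3D memory
    frames = [panorama]
    for pose in trajectory:
        # Project the evolving 3D memory onto the target viewpoint
        # to obtain an explicit spatial conditioning signal.
        guidance = reproject(memory, pose)
        # Generate the next frames with fine-grained view control,
        # conditioned on both frame history and the reprojection.
        new_frames = video_gen(frames, pose, guidance)
        frames.extend(new_frames)
        # Evolve the 3D reconstruction with a feed-forward update.
        memory = recon_update(memory, new_frames, pose)
    return frames
```

The loop makes the key design choice visible: the 3D memory is read (reprojected) before each generation step and written (updated) after it, so geometry and appearance co-evolve along the trajectory.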
Problem

Research questions and friction points this paper is trying to address.

Generating long-horizon panoramic videos with spatial consistency
Evolving 3D memory for geometric guidance in video synthesis
Enhancing visual realism and coherence in world modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates panoramic videos using fine-grained view control
Evolves 3D reconstruction with plug-and-play transformer module
Enhances realism via geometric reprojections from 3D memory
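The reprojection step listed above, projecting reconstructed geometry into an equirectangular panorama at a target viewpoint, follows standard spherical-projection math. A minimal NumPy sketch, assuming world-space points and a 4x4 world-to-camera pose (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def reproject_to_panorama(points_world, world_to_cam, height, width):
    """Project 3D points into equirectangular pixel coordinates.

    points_world: (N, 3) points from the evolving reconstruction.
    world_to_cam: 4x4 transform of the target viewpoint.
    Returns (N, 2) pixel coordinates (u, v) and per-point depth.
    """
    # Transform points into the target camera frame.
    pts_h = np.concatenate([points_world, np.ones((len(points_world), 1))], axis=1)
    pts_cam = (world_to_cam @ pts_h.T).T[:, :3]

    depth = np.linalg.norm(pts_cam, axis=1)
    d = pts_cam / depth[:, None]  # unit viewing directions

    # Spherical angles: longitude in [-pi, pi], latitude in [-pi/2, pi/2].
    lon = np.arctan2(d[:, 0], d[:, 2])
    lat = np.arcsin(np.clip(d[:, 1], -1.0, 1.0))

    # Map angles to equirectangular pixel coordinates.
    u = (lon / (2.0 * np.pi) + 0.5) * width
    v = (0.5 - lat / np.pi) * height
    return np.stack([u, v], axis=1), depth
```

A point directly ahead of the camera lands at the image center; the returned depth supports z-buffering so only the nearest surface conditions the generator.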