Video World Models with Long-term Spatial Memory

๐Ÿ“… 2025-06-05
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing video world models suffer from limited temporal context windows, leading to environmental inconsistency and geometric forgetting upon scene revisitation. To address this, we propose the first video world model endowed with long-term spatial memory, leveraging a geometry-aligned 3D memory mechanism for cross-frame and cross-episode storage and retrieval of environmental statesโ€”thereby overcoming context-length bottlenecks. Our method integrates multi-view geometric modeling, memory-augmented Transformers, and a novel 3D memory encoding/retrieval module. We further introduce a dedicated long-horizon video-action paired dataset. Experiments demonstrate a 2.3ร— increase in effective context length, a 41% reduction in geometric error during scene re-visitation, and robust coherent generation over hundred-frame sequences. The model significantly outperforms baselines in both spatial-temporal consistency and visual fidelity.

Technology Category

Application Category

๐Ÿ“ Abstract
Emerging world models autoregressively generate video frames in response to actions, such as camera movements and text prompts, among other control signals. Due to limited temporal context window sizes, these models often struggle to maintain scene consistency during revisits, leading to severe forgetting of previously generated environments. Inspired by the mechanisms of human memory, we introduce a novel framework to enhancing long-term consistency of video world models through a geometry-grounded long-term spatial memory. Our framework includes mechanisms to store and retrieve information from the long-term spatial memory and we curate custom datasets to train and evaluate world models with explicitly stored 3D memory mechanisms. Our evaluations show improved quality, consistency, and context length compared to relevant baselines, paving the way towards long-term consistent world generation.
Problem

Research questions and friction points this paper is trying to address.

Enhancing long-term scene consistency in video world models
Reducing forgetting of previously generated environments
Improving quality and context length with 3D memory mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Geometry-grounded long-term spatial memory
Store and retrieve 3D memory mechanisms
Custom datasets for training evaluation
๐Ÿ”Ž Similar Papers
No similar papers found.