🤖 AI Summary
This work addresses the challenge of maintaining long-term 3D consistency in video generation when revisiting scenes, particularly under complex occlusions and scale ambiguities. To this end, the authors propose an implicit 3D-aware memory mechanism that leverages intermediate features from a pre-trained feed-forward novel view synthesis (FF-NVS) model to compute view-dependent relevance scores for robust retrieval of historical frames. A 3D-aligned memory injection module then implicitly warps the retrieved content to adaptively guide generation. Notably, the approach operates without explicit 3D reconstruction and represents the first effort to integrate implicit 3D awareness into both memory retrieval and injection. Experiments demonstrate that the method outperforms state-of-the-art approaches in terms of revisit consistency, generation fidelity, and camera control accuracy.
📝 Abstract
Despite remarkable progress in video generation, maintaining long-term scene consistency upon revisiting previously explored areas remains challenging. Existing solutions rely either on explicitly constructing 3D geometry, which suffers from error accumulation and scale ambiguity, or on naive camera Field-of-View (FoV) retrieval, which typically fails under complex occlusions. To overcome these limitations, we propose I3DM, a novel implicit 3D-aware memory mechanism for consistent video scene generation that bypasses explicit 3D reconstruction. At the core of our approach is a 3D-aware memory retrieval strategy, which leverages the intermediate features of a pre-trained Feed-Forward Novel View Synthesis (FF-NVS) model to score view relevance, enabling robust retrieval even in highly occluded scenarios. Furthermore, to fully utilize the retrieved historical frames, we introduce a 3D-aligned memory injection module. This module implicitly warps historical content to the target view and adaptively conditions the generation on reliable warping regions, leading to improved revisit consistency and accurate camera control. Extensive experiments demonstrate that our method outperforms state-of-the-art approaches, achieving superior revisit consistency, generation fidelity, and camera control precision.
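The abstract does not give implementation details, but the retrieval step it describes — scoring each historical frame's relevance to the target view via intermediate FF-NVS features — can be sketched minimally. Below is a hedged illustration, assuming each memory frame is summarized by a pooled feature vector and relevance is measured by cosine similarity; the function and variable names (`retrieve_memory_frames`, `memory_feats`) are hypothetical, not from the paper.

```python
import numpy as np

def retrieve_memory_frames(target_feat, memory_feats, top_k=4):
    """Rank historical frames by cosine similarity between each frame's
    (hypothetical) pooled FF-NVS feature and the target-view feature,
    and return the indices of the top_k most relevant frames.

    target_feat  : (D,)  pooled feature of the target view
    memory_feats : (N, D) pooled features of N historical frames
    """
    t = target_feat / (np.linalg.norm(target_feat) + 1e-8)
    m = memory_feats / (np.linalg.norm(memory_feats, axis=1, keepdims=True) + 1e-8)
    scores = m @ t                        # cosine similarity per memory frame
    order = np.argsort(-scores)           # most relevant first
    return order[:top_k], scores

# Toy usage: 8 memory frames with 16-dim pooled features; frame 3 is made
# to resemble the target view, so it should be retrieved first.
rng = np.random.default_rng(0)
memory = rng.normal(size=(8, 16))
target = memory[3] + 0.05 * rng.normal(size=16)
idx, scores = retrieve_memory_frames(target, memory, top_k=2)
```

Because the similarity is computed in a learned feature space rather than from camera FoV overlap, frames showing the same content under occlusion or from a different scale can still score highly, which is the robustness property the abstract claims for the 3D-aware retrieval.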