🤖 AI Summary
Existing dynamic scene perception methods are limited to local temporal modeling between adjacent frames and fail to effectively exploit long-range historical information. To address this, we propose the Global Temporal Aggregation Denoising network (GTAD), the first framework to integrate latent-space denoising with Transformer-based sequence modeling, jointly fusing local instantaneous features with global multi-frame historical features for coherent and robust 3D semantic occupancy estimation. Our key contributions are: (1) a global temporal aggregation framework that explicitly enforces cross-frame semantic consistency; and (2) a latent-space denoising module that suppresses noise in historical observations and improves temporal generalization. GTAD achieves significant improvements over state-of-the-art methods on the nuScenes and Occ3D-nuScenes benchmarks, and ablation studies validate the effectiveness of each component. This work establishes a scalable, temporally aware modeling paradigm for dynamic scene understanding.
📝 Abstract
Accurately perceiving dynamic environments is a fundamental task for autonomous driving and robotic systems. Existing methods make inadequate use of temporal information: they rely mainly on local temporal interactions between adjacent frames and fail to exploit global sequence information. To address this limitation, we investigate how to aggregate global temporal features from historical sequences, aiming at occupancy representations that make efficient use of the full observation history. To this end, we propose a global temporal aggregation denoising network, GTAD, which introduces global temporal information aggregation as a new paradigm for holistic 3D scene understanding. Our method employs an in-model latent denoising network to aggregate local temporal features from the current moment together with global temporal features from the historical sequence. This enables the perception of both fine-grained temporal cues from adjacent frames and global temporal patterns from long-range observations, yielding a more coherent and comprehensive understanding of the environment. Extensive experiments on the nuScenes and Occ3D-nuScenes benchmarks, along with ablation studies, demonstrate the superiority of our method.
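The core idea above — fusing the current frame's local features with an attention-style aggregation over the entire historical sequence, then denoising the fused latent — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the single-head attention form, the identity projection matrices, and the toy noise predictor are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_global_temporal(current, history, W_q, W_k, W_v):
    """Attend from the current-frame feature over the FULL history
    (global temporal aggregation), not just the previous frame."""
    q = current @ W_q                             # query from current frame, (d,)
    K = history @ W_k                             # keys from all past frames, (T, d)
    V = history @ W_v                             # values from all past frames, (T, d)
    attn = softmax(q @ K.T / np.sqrt(len(q)))     # attention over T history frames
    return attn @ V                               # aggregated global feature, (d,)

def denoise_latent(z, noise_pred):
    """Stand-in for the in-model latent denoising step:
    subtract the noise predicted for the latent z."""
    return z - noise_pred(z)

d, T = 8, 5
history = rng.normal(size=(T, d))    # features of T historical frames (toy data)
current = rng.normal(size=d)         # current-frame (local) feature (toy data)
W_q = W_k = W_v = np.eye(d)          # identity projections, for the sketch only

g = aggregate_global_temporal(current, history, W_q, W_k, W_v)
fused = denoise_latent(current + g, lambda z: 0.1 * z)  # toy noise predictor
print(fused.shape)
```

In the actual method the projections, the noise predictor, and the fusion would be learned modules operating on latent occupancy features; the sketch only shows how local and global temporal information combine before denoising.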