GTAD: Global Temporal Aggregation Denoising Learning for 3D Semantic Occupancy Prediction

📅 2025-07-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing dynamic scene perception methods are limited to local temporal modeling between adjacent frames and fail to effectively leverage long-range historical information. To address this, we propose GTAD, a Global Temporal Aggregation Denoising network and the first framework to integrate latent-space denoising with Transformer-based sequence modeling, jointly fusing local instantaneous features and global multi-frame historical features for coherent and robust 3D semantic occupancy estimation. Our key contributions are: (1) a global temporal aggregation framework that explicitly enforces cross-frame semantic consistency; and (2) a latent-space denoising module that suppresses noise in historical observations and improves temporal generalization. GTAD achieves significant improvements over state-of-the-art methods on the nuScenes and Occ3D-nuScenes benchmarks, and ablation studies validate the effectiveness of each component. This work establishes a scalable, temporally aware modeling paradigm for dynamic scene understanding.

📝 Abstract
Accurately perceiving dynamic environments is a fundamental task for autonomous driving and robotic systems. Existing methods inadequately utilize temporal information, relying mainly on local temporal interactions between adjacent frames and failing to leverage global sequence information effectively. To address this limitation, we investigate how to effectively aggregate global temporal features from temporal sequences, aiming to achieve occupancy representations that efficiently utilize global temporal information from historical observations. For this purpose, we propose a global temporal aggregation denoising network named GTAD, introducing a global temporal information aggregation framework as a new paradigm for holistic 3D scene understanding. Our method employs an in-model latent denoising network to aggregate local temporal features from the current moment and global temporal features from historical sequences. This approach enables the effective perception of both fine-grained temporal information from adjacent frames and global temporal patterns from historical observations. As a result, it provides a more coherent and comprehensive understanding of the environment. Extensive experiments on the nuScenes and Occ3D-nuScenes benchmarks and ablation studies demonstrate the superiority of our method.
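The fusion the abstract describes, denoising a buffer of historical latent features and aggregating them with the current frame, can be sketched in minimal NumPy. This is an illustrative approximation, not the paper's architecture: the function names, the soft-thresholding used in place of the learned latent denoising network, and the dot-product attention over the history are all assumptions for exposition.

```python
import numpy as np


def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def aggregate_temporal(current, history, tau=0.1):
    """Fuse a current-frame feature with a global history buffer.

    current: (D,) latent feature of the current frame (local information)
    history: (T, D) latent features of T past frames (global information)
    tau:     soft-threshold used as a stand-in for the paper's learned
             latent-space denoising module (hypothetical choice)
    """
    # Latent-space "denoising": soft-threshold small activations,
    # suppressing noise in the historical observations.
    denoised = np.sign(history) * np.maximum(np.abs(history) - tau, 0.0)

    # Attention scores: similarity of the current frame to every past
    # frame, so aggregation spans the whole sequence, not just neighbors.
    scores = denoised @ current / np.sqrt(current.shape[0])
    weights = softmax(scores)

    # Global temporal feature: attention-weighted sum over all T frames.
    global_feat = weights @ denoised

    # Fuse local (current) and global (historical) features.
    return 0.5 * (current + global_feat)
```

In the actual method the denoising and aggregation are learned jointly inside the network; the sketch only shows why attending over the full history, rather than only the previous frame, exposes long-range temporal patterns to the occupancy head.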
Problem

Research questions and friction points this paper is trying to address.

Enhances 3D semantic occupancy prediction using global temporal aggregation
Improves dynamic environment perception for autonomous driving systems
Aggregates local and global temporal features for coherent scene understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Global temporal aggregation for 3D occupancy prediction
Latent denoising network for feature integration
Combines local and global temporal information
Tianhao Li
Department of Computer and Technology, Fudan University, Shanghai, China
Yang Li
School of Computer Science and Technology, East China Normal University, Shanghai, China
Mengtian Li
Shanghai Film Academy of Shanghai University, Shanghai, China
Yisheng Deng
Department of Computer and Technology, Fudan University, Shanghai, China
Weifeng Ge
Fudan University
Humanoid Robot · Computer Vision · Artificial Intelligence · AI4Science