Street Gaussians without 3D Object Tracker

📅 2024-12-07
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 0
🤖 AI Summary
High-fidelity 3D reconstruction of street scenes with fast-moving objects remains challenging in autonomous driving: existing methods rely either on labor-intensive pose annotations or on 3D trackers that generalize poorly, compromising robustness. Method: an end-to-end dynamic scene reconstruction framework that eliminates the need for explicit 3D tracking. The approach introduces a stable tracking module that fuses 2D depth-aware object tracking into 3D Gaussian splatting (Street Gaussians), coupled with self-correcting motion learning in an implicit feature space that rectifies trajectory errors and recovers missed detections; multi-frame feature fusion further enhances temporal consistency. Results: on Waymo-NOTR and KITTI, the method achieves significant improvements in dynamic-object reconstruction fidelity and cross-scene generalization, outperforming current state-of-the-art approaches.
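To make the "2D depth-aware object tracking" idea concrete, here is a minimal illustrative sketch (not the paper's implementation) of lifting a 2D tracked bounding box into 3D using a depth map and pinhole camera intrinsics. The function name, the median-depth heuristic, and all parameters are assumptions for illustration only.

```python
import numpy as np

def lift_bbox_to_3d(bbox_xyxy, depth_map, K):
    """Lift a 2D tracked bounding box to a 3D camera-frame point using depth.

    Illustrative sketch: the object depth is approximated by the median
    depth inside the box (robust to boundary pixels), and the box center
    is back-projected through the pinhole intrinsics K.
    """
    x0, y0, x1, y1 = [int(v) for v in bbox_xyxy]
    z = float(np.median(depth_map[y0:y1, x0:x1]))   # median depth in the box
    u = (x0 + x1) / 2.0                              # box center, pixel coords
    v = (y0 + y1) / 2.0
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Pinhole back-projection: (pixel, depth) -> 3D point in camera frame
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
```

Per-frame 3D points obtained this way could then be associated across frames by a 2D tracker's identities, which is the kind of 2D-to-3D fusion the summary alludes to.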

πŸ“ Abstract
Realistic scene reconstruction in driving scenarios poses significant challenges due to fast-moving objects. Most existing methods rely on labor-intensive manual labeling of object poses to reconstruct dynamic objects in canonical space and move them based on these poses during rendering. While some approaches attempt to use 3D object trackers to replace manual annotations, the limited generalization of 3D trackers -- caused by the scarcity of large-scale 3D datasets -- results in inferior reconstructions in real-world settings. In contrast, 2D foundation models demonstrate strong generalization capabilities. To eliminate the reliance on 3D trackers and enhance robustness across diverse environments, we propose a stable object tracking module by leveraging associations from 2D deep trackers within a 3D object fusion strategy. We address inevitable tracking errors by further introducing a motion learning strategy in an implicit feature space that autonomously corrects trajectory errors and recovers missed detections. Experimental results on Waymo-NOTR and KITTI show that our method outperforms existing approaches. Our code will be made publicly available.
Problem

Research questions and friction points this paper is trying to address.

Eliminating reliance on 3D trackers for dynamic scene reconstruction.
Maintaining robustness across diverse driving environments.
Correcting tracking errors and missed detections without manual supervision.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages 2D deep trackers for 3D object fusion
Introduces motion learning in implicit feature space
Autonomously corrects trajectory errors and missed detections
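The trajectory-correction idea above can be sketched in a simplified form. The paper performs motion learning in an implicit feature space; the toy function below is only an explicit-space analogue, recovering missed detections by linear interpolation between neighboring observations (a constant-velocity assumption). The function name and gap convention (`None` entries) are hypothetical.

```python
import numpy as np

def fill_missed_detections(track):
    """Fill gaps (None entries) in a per-frame 3D trajectory.

    Simplified stand-in for implicit-space motion learning: missed frames
    are interpolated linearly between the nearest observed positions;
    gaps at either end are held at the closest observation.
    """
    track = list(track)
    obs = [i for i, p in enumerate(track) if p is not None]
    for i, p in enumerate(track):
        if p is not None:
            continue
        prev = max((j for j in obs if j < i), default=None)
        nxt = min((j for j in obs if j > i), default=None)
        if prev is not None and nxt is not None:
            t = (i - prev) / (nxt - prev)   # interpolation weight
            track[i] = (1 - t) * np.asarray(track[prev]) + t * np.asarray(track[nxt])
        elif prev is not None:
            track[i] = np.asarray(track[prev])  # hold last observed position
        elif nxt is not None:
            track[i] = np.asarray(track[nxt])   # hold first observed position
    return [np.asarray(p, dtype=float) for p in track]
```

A learned corrector would replace this hand-coded rule with a model over track features, which is what lets the paper's module also rectify erroneous (not just missing) detections.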