🤖 AI Summary
Existing dataset recording methods struggle to produce fully dense, accurate depth ground truth in large-scale dynamic scenes, hindering progress in depth estimation research. To address this, the authors propose DOC-Depth: a framework that first reconstructs a dense, temporally consistent 3D environment via LiDAR odometry, then applies DOC, their dynamic object classification method, to automatically detect dynamic objects and resolve the occlusions they cause. The approach is sensor-agnostic, working with any LiDAR, and is fast and scalable, supporting dataset creation unbounded in size and duration. On KITTI, it raises depth map density from 16.1% to 71.2%, and the authors release this fully dense depth annotation along with all software components to support future research on depth estimation in dynamic environments.
📝 Abstract
Accurate depth information is essential for many computer vision applications. Yet, no available dataset recording method allows for fully dense, accurate depth estimation in a large-scale dynamic environment. In this paper, we introduce DOC-Depth, a novel, efficient, and easy-to-deploy approach for dense depth generation from any LiDAR sensor. After reconstructing a consistent, dense 3D environment using LiDAR odometry, we address dynamic object occlusions automatically thanks to DOC, our state-of-the-art dynamic object classification method. Additionally, DOC-Depth is fast and scalable, allowing for the creation of datasets unbounded in size and time. We demonstrate the effectiveness of our approach on the KITTI dataset, improving its density from 16.1% to 71.2%, and we release this new fully dense depth annotation to facilitate future research in the domain. We also showcase results using various LiDAR sensors and in multiple environments. All software components are publicly available for the research community.
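The headline density numbers (16.1% vs. 71.2%) refer to the fraction of image pixels carrying a valid depth value. A minimal sketch of how such a density figure can be computed is below; the function name and the convention that invalid pixels are stored as zero are illustrative assumptions, not taken from the released code:

```python
import numpy as np

def depth_density(depth_map: np.ndarray, invalid_value: float = 0.0) -> float:
    """Return the fraction of pixels with a valid (non-invalid) depth value."""
    valid = depth_map != invalid_value  # boolean mask of annotated pixels
    return float(valid.sum()) / depth_map.size

# Simulated sparse annotation at roughly KITTI image resolution:
# ~16% of pixels hold a depth value, as with raw LiDAR projections.
rng = np.random.default_rng(0)
sparse = np.where(rng.random((370, 1226)) < 0.161, 5.0, 0.0)
print(f"density ≈ {depth_density(sparse):.3f}")
```

A fully annotated map would score 1.0 under this metric; the paper's contribution is pushing real-world ground truth much closer to that value.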