🤖 AI Summary
This work addresses the challenge of sparse and incomplete 3D LiDAR point clouds in autonomous driving, caused by occlusion and long-range sensing limits, by proposing the first flow matching–based framework for 3D LiDAR scene completion. Because diffusion models must predict Gaussian noise, their training and inference initial distributions can mismatch; the method avoids this bias by aligning the initial distributions used during training and inference. It further combines a nearest-neighbor flow matching loss with a Chamfer distance loss to jointly preserve fine-grained local detail and global structural coherence in the completed point clouds, achieving state-of-the-art performance across multiple benchmark metrics.
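The consistency claim above can be made concrete: flow matching both trains on and samples from the same Gaussian initial distribution, so inference simply integrates the learned velocity field forward in time. A minimal sketch of such a sampler is below; the function name, step count, and Euler integrator are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sample_flow(model_velocity, n_points=1024, n_steps=10, seed=0):
    # Inference by Euler integration of a learned velocity field v(x, t).
    # The initial points are drawn from N(0, I) -- the same distribution
    # flow matching uses during training, so there is no train/inference
    # mismatch of the kind noise-predicting diffusion models can suffer.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_points, 3))  # start from standard Gaussian
    dt = 1.0 / n_steps
    for step in range(n_steps):
        t = step * dt
        x = x + dt * model_velocity(x, t)   # x_{t+dt} = x_t + v(x_t, t) * dt
    return x
```

With a trained model, `sample_flow` would transport Gaussian noise into a completed point cloud in a fixed number of deterministic steps.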
📝 Abstract
In autonomous driving scenarios, collected LiDAR point clouds are often degraded by occlusion and long-range sparsity, limiting the perception of autonomous driving systems. Scene completion methods infer the missing parts of incomplete 3D LiDAR scenes. Recent methods adopt local point-level denoising diffusion probabilistic models, which must predict Gaussian noise and therefore suffer a mismatch between the initial distributions used in training and inference. This paper introduces LiFlow, the first flow matching framework for 3D LiDAR scene completion, which improves upon diffusion-based methods by keeping the initial distributions consistent between training and inference. The model combines a nearest-neighbor flow matching loss with a Chamfer distance loss to enhance both local structure and global coverage when aligning point clouds. LiFlow achieves state-of-the-art performance across multiple metrics. Code: https://github.com/matteandre/LiFlow.
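The two losses mentioned above can be sketched in a few lines: a symmetric Chamfer distance for global coverage, and a flow matching regression where each initial point is paired with its nearest target point before computing the straight-line velocity target. This is a minimal numpy sketch under assumed definitions, not the authors' code; the pairing scheme and function names are illustrative.

```python
import numpy as np

def chamfer_distance(a, b):
    # Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    # mean nearest-neighbor distance in both directions, encouraging
    # global coverage of one set by the other.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def nn_flow_matching_loss(x0, x1, model_velocity, rng=None):
    # Pair each initial point x0[i] with its nearest neighbor in the
    # target cloud x1, then regress the predicted velocity toward the
    # straight-line flow between the paired points (local structure).
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(x0[:, None, :] - x1[None, :, :], axis=-1)
    matched = x1[d.argmin(axis=1)]        # nearest-neighbor pairing
    t = rng.random()                      # sample a time in [0, 1)
    xt = (1 - t) * x0 + t * matched       # linear interpolation path
    target_v = matched - x0               # constant target velocity
    pred_v = model_velocity(xt, t)
    return np.mean((pred_v - target_v) ** 2)
```

In a training loop, the two terms would be summed (possibly weighted) so that the flow matching term sharpens local detail while the Chamfer term keeps the completed cloud globally consistent.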