🤖 AI Summary
To address the inability of existing LiDAR point cloud panoptic segmentation models to recognize unknown-category objects in autonomous driving's open-world setting, this paper proposes the first uncertainty-guided framework based on Dirichlet evidential learning. Methodologically, it introduces three uncertainty-driven losses—uniform evidence, adaptive separation, and contrastive uncertainty—and integrates them with a multi-branch decoder for joint semantic/uncertainty prediction, prototype embedding, and instance center regression, enabling robust discrimination between known and unknown objects at both global and fine-grained levels. The authors establish a nuScenes open-set benchmark and extend the KITTI-360 evaluation protocol. Experiments demonstrate that the method significantly outperforms state-of-the-art approaches on both benchmarks, achieving, for the first time, end-to-end open-set panoptic segmentation and precise localization of unknown instances in LiDAR point clouds.
📝 Abstract
Autonomous vehicles that navigate in open-world environments may encounter previously unseen object classes. However, most existing LiDAR panoptic segmentation models rely on closed-set assumptions, failing to detect unknown object instances. In this work, we propose ULOPS, an uncertainty-guided open-set panoptic segmentation framework that leverages Dirichlet-based evidential learning to model predictive uncertainty. Our architecture incorporates separate decoders for semantic segmentation with uncertainty estimation, embedding with prototype association, and instance center prediction. During inference, we leverage uncertainty estimates to identify and segment unknown instances. To strengthen the model's ability to differentiate between known and unknown objects, we introduce three uncertainty-driven loss functions: a Uniform Evidence Loss that encourages high uncertainty in unknown regions, an Adaptive Uncertainty Separation Loss that enforces a consistent gap in uncertainty estimates between known and unknown objects at a global scale, and a Contrastive Uncertainty Loss that refines this separation at the fine-grained level. To evaluate open-set performance, we extend benchmark settings on KITTI-360 and introduce a new open-set evaluation for nuScenes. Extensive experiments demonstrate that ULOPS consistently outperforms existing open-set LiDAR panoptic segmentation methods.
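To make the abstract's uncertainty mechanism concrete, the sketch below shows the standard Dirichlet evidential-learning recipe that frameworks like ULOPS build on: per-point logits are mapped to non-negative evidence, the Dirichlet strength S yields an uncertainty score u = K / S, and thresholding u separates likely-unknown points from confidently classified known ones. This is a minimal illustration of the generic formulation, not the paper's exact heads or losses; the function name, toy logits, and threshold are assumptions for demonstration.

```python
import numpy as np

def dirichlet_uncertainty(logits):
    """Generic evidential-learning uncertainty (assumed sketch, not ULOPS itself).

    logits: (N, K) per-point class logits.
    Returns expected class probabilities and a per-point uncertainty in (0, 1].
    """
    evidence = np.log1p(np.exp(logits))           # softplus -> non-negative evidence
    alpha = evidence + 1.0                        # Dirichlet concentration parameters
    strength = alpha.sum(axis=-1, keepdims=True)  # S = sum_k alpha_k
    probs = alpha / strength                      # expected categorical probabilities
    k = logits.shape[-1]
    uncertainty = k / strength.squeeze(-1)        # u = K / S
    return probs, uncertainty

# One point with strong evidence for a known class, and one with near-zero
# evidence everywhere (the regime the Uniform Evidence Loss encourages for
# unknown regions).
logits = np.array([
    [8.0, -4.0, -4.0, -4.0],   # confidently known
    [-4.0, -4.0, -4.0, -4.0],  # no evidence for any class -> unknown
])
_, u = dirichlet_uncertainty(logits)
is_unknown = u > 0.7  # illustrative threshold, tuned per dataset in practice
```

Here the second point's uncertainty is near its maximum of 1 while the first point's is far lower, so a simple threshold on u recovers the known/unknown split that the paper's inference stage relies on.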