🤖 AI Summary
To address the poor generalization of LiDAR semantic segmentation under cross-domain scenarios, this paper studies unsupervised image-to-point-cloud knowledge distillation guided by Vision Foundation Models (VFMs), which provide robust features across domains. The method tackles domain shift without requiring target-domain annotations. The study's findings are threefold: (1) the architecture of the LiDAR backbone is key to maximizing generalization performance on a target domain; (2) a single backbone can be pretrained once and reused to address many different domain shifts; and (3) the best results are obtained by keeping the pretrained backbone frozen and training only an MLP head for semantic segmentation. Evaluated on four mainstream cross-domain benchmarks, the approach achieves state-of-the-art performance, significantly narrowing the inter-domain accuracy gap. The results demonstrate strong generalization and practical applicability for unsupervised domain adaptation in 3D semantic segmentation.
📝 Abstract
Semantic segmentation networks trained under full supervision on one type of lidar fail to generalize to unseen lidars without intervention. To reduce the performance gap under domain shifts, a recent trend is to leverage vision foundation models (VFMs), which provide robust features across domains. In this work, we conduct an exhaustive study to identify recipes for exploiting VFMs in unsupervised domain adaptation for semantic segmentation of lidar point clouds. Building upon unsupervised image-to-lidar knowledge distillation, our study reveals that: (1) the architecture of the lidar backbone is key to maximizing generalization performance on a target domain; (2) it is possible to pretrain a single backbone once and for all, and use it to address many domain shifts; (3) best results are obtained by keeping the pretrained backbone frozen and training an MLP head for semantic segmentation. The resulting pipeline achieves state-of-the-art results in four widely recognized and challenging settings. The code will be available at: https://github.com/valeoai/muddos.
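Finding (3) above, freezing the pretrained backbone and training only a lightweight head, can be illustrated with a toy sketch. Everything below is hypothetical (a random projection stands in for the pretrained lidar backbone, a single linear layer stands in for the MLP head, and the data is synthetic); it is not the paper's implementation, only a minimal illustration of linear probing on frozen features.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 4  # number of semantic classes (toy setting)

# Stand-in for a frozen pretrained lidar backbone: a fixed random
# projection + ReLU whose weights are never updated during training.
W_backbone = rng.normal(size=(16, 64))
def backbone(x):
    return np.maximum(x @ W_backbone, 0.0)

# Toy per-point features; labels come from a hidden "teacher" rule
# so the head has something learnable to fit.
X = rng.normal(size=(256, 16))
y = (X @ rng.normal(size=(16, C))).argmax(axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_loss(W):
    p = softmax(feats @ W)
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

feats = backbone(X)          # computed once: the backbone is frozen
W_head = np.zeros((64, C))   # trainable head (a 1-layer MLP for brevity)

loss_init = ce_loss(W_head)
for _ in range(300):         # plain gradient descent on cross-entropy
    g = softmax(feats @ W_head)
    g[np.arange(len(y)), y] -= 1.0          # dCE/dlogits
    W_head -= 0.05 * feats.T @ g / len(y)   # only the head is updated
loss_final = ce_loss(W_head)
acc = (softmax(feats @ W_head).argmax(1) == y).mean()
```

Because the backbone is frozen, its features are computed once and reused at every step, which is what makes this adaptation recipe cheap compared to full fine-tuning.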