🤖 AI Summary
To address the challenge of transferring two-dimensional (2D) vision foundation models (VFMs) to three-dimensional (3D) point cloud semantic segmentation, this paper proposes DITR, a framework for cross-modal knowledge transfer. First, it introduces a cross-modal feature injection mechanism that projects self-supervised 2D image features (e.g., from DINO) into 3D space and fuses them into a point cloud encoder (e.g., PointNeXt). Second, for settings where images are unavailable at inference time, it employs a knowledge-distillation pretraining strategy that transfers the discriminative representations of 2D VFMs into the 3D backbone, so the pretrained model needs no RGB input at deployment. To the authors' knowledge, this makes DITR the first method to bring 2D VFM knowledge to pure point cloud segmentation end to end, including in image-deprived scenarios. Extensive experiments demonstrate state-of-the-art performance on indoor and outdoor benchmarks (ScanNet, S3DIS, and SemanticKITTI), with significant mIoU gains.
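The projection step described above can be sketched in a few lines. This is an illustrative, simplified version, not the paper's implementation: it assumes points are already in the camera frame with known intrinsics, and uses nearest-pixel sampling where a real system might use bilinear interpolation and multi-view aggregation. The function name and zero-fill behavior for uncovered points are assumptions for this sketch.

```python
import numpy as np

def project_image_features_to_points(points, feat_map, K):
    """Lift per-pixel 2D features onto 3D points via pinhole projection.

    points:   (N, 3) 3D points in the camera frame (z > 0 assumed).
    feat_map: (H, W, C) dense 2D feature map (e.g., from a frozen VFM).
    K:        (3, 3) camera intrinsics.

    Returns an (N, C) array of per-point features; points projecting
    outside the image receive zeros (a simple stand-in for handling
    missing image coverage).
    """
    H, W, C = feat_map.shape
    uvw = points @ K.T                     # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide
    u = np.round(uv[:, 0]).astype(int)     # nearest-pixel sampling
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    out = np.zeros((points.shape[0], C), dtype=feat_map.dtype)
    out[valid] = feat_map[v[valid], u[valid]]
    return out
```

The resulting per-point feature array can then be fused with the point cloud encoder's own features, e.g., by concatenation or addition at an intermediate layer.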
📝 Abstract
Vision foundation models (VFMs) trained on large-scale image datasets provide high-quality features that have significantly advanced 2D visual recognition. However, their potential in 3D vision remains largely untapped, despite the common availability of 2D images alongside 3D point cloud datasets. While significant research has been dedicated to 2D-3D fusion, recent state-of-the-art 3D methods predominantly focus on 3D data, leaving the integration of VFMs into 3D models underexplored. In this work, we challenge this trend by introducing DITR, a simple yet effective approach that extracts 2D foundation model features, projects them to 3D, and finally injects them into a 3D point cloud segmentation model. DITR achieves state-of-the-art results on both indoor and outdoor 3D semantic segmentation benchmarks. To enable the use of VFMs even when images are unavailable during inference, we further propose to distill 2D foundation models into a 3D backbone as a pretraining task. By initializing the 3D backbone with knowledge distilled from 2D VFMs, we create a strong basis for downstream 3D segmentation tasks, ultimately boosting performance across various datasets.
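The distillation pretraining described in the abstract pairs each 3D point with a projected 2D teacher feature and trains the 3D backbone to reproduce it. The abstract does not specify the objective, so the cosine-distance loss below is only a common, illustrative choice; the function name and signature are assumptions for this sketch.

```python
import numpy as np

def distillation_loss(student_feats, teacher_feats, eps=1e-8):
    """Cosine-distance distillation loss between per-point features.

    student_feats: (N, C) features from the 3D backbone being pretrained.
    teacher_feats: (N, C) frozen 2D VFM features projected onto the same
                   points (the pretraining target).

    Returns mean(1 - cosine_similarity) over points: 0 when the student
    matches the teacher's feature directions, up to 2 when opposed.
    """
    s = student_feats / (np.linalg.norm(student_feats, axis=1, keepdims=True) + eps)
    t = teacher_feats / (np.linalg.norm(teacher_feats, axis=1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(s * t, axis=1)))
```

After pretraining with such an objective, the 3D backbone is fine-tuned on labeled segmentation data without needing images, which is what allows deployment in image-deprived scenarios.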