🤖 AI Summary
This paper addresses the challenge of unifying knowledge distillation from heterogeneous multi-task vision teacher models spanning both 2D perception and 3D understanding. To this end, the authors propose a heterogeneous teacher co-distillation paradigm and introduce DUNE, a lightweight universal encoder. Methodologically, DUNE incorporates teacher-specific feature encoding, adaptive cross-modal feature alignment, and heterogeneous input adaptation to jointly model diverse tasks, including 2D classification and segmentation, depth estimation, 3D reconstruction, and map-free relocalization. According to the authors, DUNE is the first framework to enable unified cross-modal and cross-task knowledge distillation. It matches or surpasses the performance of the individual large-scale teacher models on multiple 2D and 3D benchmarks; notably, on map-free relocalization, DUNE significantly outperforms MASt3R while using an order of magnitude fewer parameters.
📝 Abstract
Recent multi-teacher distillation methods have unified the encoders of multiple foundation models into a single encoder, achieving competitive performance on core vision tasks like classification, segmentation, and depth estimation. This led us to ask: Could similar success be achieved when the pool of teachers also includes vision models specialized in diverse tasks across both 2D and 3D perception? In this paper, we define and investigate the problem of heterogeneous teacher distillation, or co-distillation, a challenging multi-teacher distillation scenario where teacher models vary significantly in both (a) their design objectives and (b) the data they were trained on. We explore data-sharing strategies and teacher-specific encoding, and introduce DUNE, a single encoder excelling in 2D vision, 3D understanding, and 3D human perception. Our model achieves performance comparable to that of its larger teachers, sometimes even outperforming them, on their respective tasks. Notably, DUNE surpasses MASt3R in Map-free Visual Relocalization with a much smaller encoder.
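To make the "teacher-specific encoding" idea concrete, below is a minimal sketch of a multi-teacher co-distillation objective. It is not the paper's actual implementation: the teacher names, feature dimensions, and the choice of a plain MSE feature-regression loss with one linear projection head per teacher are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: student embedding size and per-teacher
# feature sizes (the real teachers/dimensions differ).
D_STUDENT = 64
TEACHER_DIMS = {"2d_teacher": 96, "3d_teacher": 128, "human_teacher": 80}

# One projection head per teacher maps shared student features into
# that teacher's feature space (a simple form of teacher-specific encoding).
heads = {name: rng.normal(0.0, 0.02, size=(D_STUDENT, d))
         for name, d in TEACHER_DIMS.items()}

def co_distillation_loss(student_feats, teacher_feats):
    """Sum of per-teacher MSE losses between the projected student
    features and each teacher's target features."""
    total = 0.0
    for name, target in teacher_feats.items():
        pred = student_feats @ heads[name]  # project into teacher space
        total += float(np.mean((pred - target) ** 2))
    return total

# Toy batch: 4 feature vectors from the student, plus fake teacher targets.
student = rng.normal(size=(4, D_STUDENT))
targets = {name: rng.normal(size=(4, d)) for name, d in TEACHER_DIMS.items()}
loss = co_distillation_loss(student, targets)
print(loss)
```

In practice each teacher only provides targets on its own training data, so the per-teacher terms would be computed on different (possibly shared) batches; this sketch ignores that data-sharing aspect.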