🤖 AI Summary
This work addresses the challenge that existing knowledge distillation methods in semantic segmentation often fail to preserve the out-of-distribution generalization of vision foundation models under distribution shifts. To overcome this limitation, the authors propose Generalizable Knowledge Distillation (GKD), a framework that decouples representation learning from task-specific learning through a two-stage training strategy. In the first stage, a query-based soft distillation mechanism, in which student features act as queries over teacher representations, selectively transfers generalizable spatial knowledge; in the second stage, the learned representations are frozen while the student adapts to the segmentation task, mitigating overfitting to seen domains. Evaluated on five domain generalization benchmarks, GKD significantly outperforms current state-of-the-art methods, achieving average mIoU improvements of 1.9% and 10.6% under the foundation-to-foundation (F2F) and foundation-to-local (F2L) distillation settings, respectively, thereby enabling efficient and robust knowledge transfer.
📝 Abstract
Knowledge distillation (KD) has been widely applied in semantic segmentation to compress large models, but conventional approaches primarily preserve in-domain accuracy while neglecting out-of-domain generalization, which is essential under distribution shifts. This limitation becomes more severe with the emergence of vision foundation models (VFMs): although VFMs exhibit strong robustness on unseen data, distilling them with conventional KD often compromises this ability. We propose Generalizable Knowledge Distillation (GKD), a multi-stage framework that explicitly enhances generalization. GKD decouples representation learning from task learning. In the first stage, the student acquires domain-agnostic representations through selective feature distillation; in the second stage, these representations are frozen for task adaptation, thereby mitigating overfitting to seen domains. To further support transfer, we introduce a query-based soft distillation mechanism, in which student features serve as queries over teacher representations, selectively retrieving transferable spatial knowledge from the VFM. Extensive experiments on five domain generalization benchmarks demonstrate that GKD consistently outperforms existing KD methods, achieving average gains of +1.9% in foundation-to-foundation (F2F) and +10.6% in foundation-to-local (F2L) distillation. The code will be available at https://github.com/Younger-hua/GKD.
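The abstract does not specify the exact form of the query-based soft distillation loss; a minimal NumPy sketch of the general idea, under the assumption that it resembles cross-attention (student tokens as queries, teacher tokens as keys/values) followed by a regression loss on the retrieved teacher features, might look like this. All function names, shapes, and the choice of MSE are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def query_based_distillation_loss(student_feats, teacher_feats, temperature=1.0):
    """Hypothetical query-based soft distillation objective.

    student_feats: (N, d) flattened spatial tokens from the student.
    teacher_feats: (M, d) flattened spatial tokens from the teacher (VFM).

    Student tokens act as queries over teacher tokens; the soft-attention
    retrieval selects which teacher knowledge each student location should
    match, and the student regresses the retrieved representation (MSE).
    """
    d = student_feats.shape[1]
    # Scaled dot-product attention weights: (N, M).
    attn = softmax(student_feats @ teacher_feats.T / (temperature * np.sqrt(d)))
    # Teacher knowledge retrieved per student query: (N, d).
    retrieved = attn @ teacher_feats
    return float(np.mean((student_feats - retrieved) ** 2))
```

The soft retrieval step is what makes the transfer selective: each student location pulls only from the teacher tokens it attends to, rather than being forced to match the teacher feature map position-by-position.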