Generalizable Knowledge Distillation from Vision Foundation Models for Semantic Segmentation

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem that existing knowledge distillation methods for semantic segmentation often fail to preserve the out-of-distribution generalization of vision foundation models under distribution shifts. To overcome this limitation, the authors propose a generalizable knowledge distillation framework, GKD, which decouples representation learning from task-specific learning through a two-stage training strategy. In the first stage, selective feature distillation transfers generalizable spatial knowledge to the student; in the second stage, the learned representations are frozen for task adaptation, and a query-based soft distillation mechanism lets student features selectively retrieve transferable knowledge from the teacher. Evaluated on five domain generalization benchmarks, GKD significantly outperforms current state-of-the-art methods, achieving average mIoU improvements of 1.9% and 10.6% under the foundation-to-foundation (F2F) and foundation-to-local (F2L) distillation settings, respectively, thereby enabling efficient and robust knowledge transfer.
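The two-stage recipe above can be sketched as a training loop. This is a minimal illustration, not the authors' implementation: the names (`student`, `teacher`, `seg_head`) and the plain MSE feature loss are assumptions standing in for the paper's selective feature distillation.

```python
import torch
import torch.nn.functional as F

def train_gkd(student, teacher, seg_head, loader, optimizer, stage):
    """Hypothetical sketch of GKD's two-stage schedule.

    Stage 1: distill the student's features toward the frozen VFM teacher.
    Stage 2: freeze the distilled representation and adapt only the task head.
    """
    teacher.eval()  # the VFM teacher is never updated
    for images, labels in loader:
        if stage == 1:
            # Stage 1: feature distillation (MSE stands in for the paper's
            # selective feature distillation loss).
            with torch.no_grad():
                t_feat = teacher(images)
            s_feat = student(images)
            loss = F.mse_loss(s_feat, t_feat)
        else:
            # Stage 2: representation freezing mitigates overfitting to the
            # seen domains; only the segmentation head learns the task.
            for p in student.parameters():
                p.requires_grad = False
            with torch.no_grad():
                s_feat = student(images)
            loss = F.cross_entropy(seg_head(s_feat), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In stage 2 the optimizer should be built over `seg_head.parameters()` only, since the backbone no longer receives gradients.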

📝 Abstract
Knowledge distillation (KD) has been widely applied in semantic segmentation to compress large models, but conventional approaches primarily preserve in-domain accuracy while neglecting out-of-domain generalization, which is essential under distribution shifts. This limitation becomes more severe with the emergence of vision foundation models (VFMs): although VFMs exhibit strong robustness on unseen data, distilling them with conventional KD often compromises this ability. We propose Generalizable Knowledge Distillation (GKD), a multi-stage framework that explicitly enhances generalization. GKD decouples representation learning from task learning. In the first stage, the student acquires domain-agnostic representations through selective feature distillation, and in the second stage, these representations are frozen for task adaptation, thereby mitigating overfitting to the seen domains. To further support transfer, we introduce a query-based soft distillation mechanism, where student features act as queries to teacher representations to selectively retrieve transferable spatial knowledge from VFMs. Extensive experiments on five domain generalization benchmarks demonstrate that GKD consistently outperforms existing KD methods, achieving average gains of +1.9% in foundation-to-foundation (F2F) and +10.6% in foundation-to-local (F2L) distillation. The code will be available at https://github.com/Younger-hua/GKD.
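The query-based soft distillation described in the abstract reads like cross-attention from student features to teacher features. A minimal sketch under that assumption follows; the projection layers, dimensions, and MSE alignment loss are all hypothetical, not the authors' actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryBasedDistillation(nn.Module):
    """Hypothetical sketch: student features act as queries over teacher
    features, so the student selectively retrieves transferable spatial
    knowledge rather than matching the teacher's features wholesale."""

    def __init__(self, student_dim, teacher_dim, embed_dim=256):
        super().__init__()
        self.q_proj = nn.Linear(student_dim, embed_dim)   # student -> queries
        self.k_proj = nn.Linear(teacher_dim, embed_dim)   # teacher -> keys
        self.v_proj = nn.Linear(teacher_dim, embed_dim)   # teacher -> values
        self.s_proj = nn.Linear(student_dim, embed_dim)   # student, for the loss

    def forward(self, student_feat, teacher_feat):
        # student_feat: (B, N, C_s); teacher_feat: (B, M, C_t),
        # with spatial positions flattened into token sequences.
        q = self.q_proj(student_feat)
        k = self.k_proj(teacher_feat)
        v = self.v_proj(teacher_feat)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        # Teacher knowledge selected by the student's queries becomes the
        # (detached) soft distillation target.
        target = (attn @ v).detach()
        return F.mse_loss(self.s_proj(student_feat), target)
```

Because the retrieved target depends on the student's own queries, the teacher supervises only what the student asks for, which is one plausible reading of "selectively retrieve transferable spatial knowledge".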
Problem

Research questions and friction points this paper is trying to address.

knowledge distillation
semantic segmentation
domain generalization
vision foundation models
out-of-domain generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalizable Knowledge Distillation
Vision Foundation Models
Domain Generalization
Semantic Segmentation
Query-based Distillation
Chonghua Lv
School of Artificial Intelligence, Xidian University, China
Dong Zhao
Department of Information Engineering and Computer Science, University of Trento, Italy
Shuang Wang
Xidian University
computer vision, remote sensing image processing, deep learning, object detection, semantic segmentation
Dou Quan
Xidian University
computer vision, deep learning
Ning Huyan
Department of Automation, Tsinghua University, China
Nicu Sebe
University of Trento
computer vision, multimedia
Zhun Zhong
Hefei University of Technology & University of Nottingham
computer vision