🤖 AI Summary
DETR models incur high computational overhead, hindering deployment in resource-constrained settings, and existing knowledge distillation (KD) methods struggle to transfer Transformer-specific global context while over-relying on potentially noisy teacher predictions. To address these issues, we propose CLoCKDistill, a Consistent Location-and-Context-aware Knowledge Distillation framework. Rather than distilling backbone features, CLoCKDistill distills the transformer encoder output (memory), which captures global context and long-range dependencies, enriches this memory with object-location details so the student prioritizes relevant regions, and constructs ground-truth-guided, target-aware queries so the student and teacher decoders attend to consistent parts of the encoder memory during logit distillation. Evaluated on KITTI and COCO across diverse DETR students (single-scale DAB-DETR, multi-scale Deformable DETR, and denoising-based DINO), CLoCKDistill consistently improves student detector performance by 2.2% to 6.4%.
📝 Abstract
Object detection has advanced significantly with Detection Transformers (DETRs). However, these models are computationally demanding, posing challenges for deployment in resource-constrained environments (e.g., self-driving cars). Knowledge distillation (KD) is an effective compression method widely applied to CNN detectors, but its application to DETR models has been limited. Most KD methods for DETRs fail to distill transformer-specific global context. Also, they blindly trust the teacher model, whose predictions can sometimes be misleading. To bridge these gaps, this paper proposes Consistent Location-and-Context-aware Knowledge Distillation (CLoCKDistill) for DETR detectors, which includes both feature distillation and logit distillation components. For feature distillation, instead of distilling backbone features like existing KD methods, we distill the transformer encoder output (i.e., memory), which contains valuable global context and long-range dependencies. We also enrich this memory with object-location details during feature distillation so that the student model can prioritize relevant regions while effectively capturing the global context. To facilitate logit distillation, we create target-aware queries based on the ground truth, allowing both the student and teacher decoders to attend to consistent and accurate parts of the encoder memory. Experiments on the KITTI and COCO datasets show our CLoCKDistill method's efficacy across various DETRs, e.g., single-scale DAB-DETR, multi-scale Deformable DETR, and denoising-based DINO. Our method boosts student detector performance by 2.2% to 6.4%.
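To make the two distillation components concrete, below is a minimal NumPy sketch of (a) location-aware memory distillation as a foreground-weighted MSE over encoder tokens, and (b) target-aware queries built from ground-truth boxes. The function names, the binary foreground mask, the weighted-MSE form, and the (cx, cy, w, h) query parameterization are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def location_mask(h, w, boxes):
    """Binary foreground mask over an h x w feature map.

    boxes: iterable of (x1, y1, x2, y2) in feature-map (integer) coordinates.
    """
    mask = np.zeros((h, w))
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = 1.0
    return mask

def memory_distill_loss(student_mem, teacher_mem, mask, fg_weight=2.0):
    """Foreground-weighted MSE between flattened encoder memories.

    student_mem, teacher_mem: (h*w, d) token features from each encoder.
    Tokens inside ground-truth boxes get weight `fg_weight`, others 1.0,
    so the student prioritizes object regions while still matching the
    teacher's global context everywhere.
    """
    weights = 1.0 + (fg_weight - 1.0) * mask.reshape(-1, 1)   # (h*w, 1)
    sq_err = (student_mem - teacher_mem) ** 2                 # (h*w, d)
    return float((weights * sq_err).sum() / (weights.sum() * sq_err.shape[1]))

def target_aware_queries(gt_boxes, img_w, img_h):
    """Decoder queries derived from ground-truth boxes as normalized
    (cx, cy, w, h) anchors.

    Feeding the same queries to the teacher and student decoders lets
    both attend to consistent, accurate parts of the encoder memory
    during logit distillation.
    """
    queries = []
    for x1, y1, x2, y2 in gt_boxes:
        queries.append([(x1 + x2) / 2 / img_w, (y1 + y2) / 2 / img_h,
                        (x2 - x1) / img_w, (y2 - y1) / img_h])
    return np.array(queries)
```

With identical memories the loss is zero, and mismatches inside ground-truth boxes are penalized more heavily than background mismatches; in a full training loop such a term would be added to the detection loss with a tunable coefficient.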