🤖 AI Summary
To address the feature representation degradation that network compression causes in lightweight object detectors, which hinders achieving real-time inference and high accuracy simultaneously, this paper proposes a vision foundation model (VFM) knowledge transfer method that requires no backbone architecture modification. The approach operates within a knowledge distillation framework and adds no deployment overhead or inference latency. Key contributions include: (1) a Deep Semantic Injector (DSI) module that stably aligns high-level VFM semantics with the detector's feature space; and (2) a Gradient-guided Adaptive Modulation (GAM) strategy that dynamically regulates semantic transfer strength during training based on gradient norm ratios. Evaluated on COCO, the method achieves 49.7–57.0 AP at 78–273 FPS, outperforming state-of-the-art real-time detectors. Notably, it is the first to deliver consistent VFM-driven performance gains while preserving real-time inference latency.
📝 Abstract
Real-time object detection has achieved substantial progress through meticulously designed architectures and optimization strategies. However, the pursuit of high-speed inference via lightweight network designs often degrades feature representation, which hinders further performance improvements and practical on-device deployment. In this paper, we propose a cost-effective and highly adaptable distillation framework that harnesses the rapidly evolving capabilities of Vision Foundation Models (VFMs) to enhance lightweight object detectors. Given the significant disparities in architecture and learning objectives between VFMs and resource-constrained detectors, achieving stable and task-aligned semantic transfer is challenging. To address this, on one hand, we introduce a Deep Semantic Injector (DSI) module that integrates high-level representations from VFMs into the deep layers of the detector. On the other hand, we devise a Gradient-guided Adaptive Modulation (GAM) strategy, which dynamically adjusts the intensity of semantic transfer based on gradient norm ratios. Without increasing deployment or inference overhead, our approach delivers substantial and consistent performance gains across diverse DETR-based models, underscoring its practical utility for real-time detection. Our new model family, RT-DETRv4, achieves state-of-the-art results on COCO, attaining AP scores of 49.7/53.5/55.4/57.0 at corresponding speeds of 273/169/124/78 FPS.
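To make the two components above concrete, the following is a minimal PyTorch sketch of the general idea, not the paper's actual implementation: a projector that aligns detector features with frozen VFM features (a DSI-style auxiliary loss), and a weight for the distillation term derived from the ratio of gradient norms (a GAM-style modulation). All names, dimensions, and the exact loss forms here are illustrative assumptions.

```python
# Hypothetical sketch of VFM feature distillation with gradient-guided
# modulation. The paper's exact DSI/GAM formulations are not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeepSemanticInjectorSketch(nn.Module):
    """Illustrative projector aligning detector features to VFM features."""

    def __init__(self, det_dim: int, vfm_dim: int):
        super().__init__()
        self.proj = nn.Linear(det_dim, vfm_dim)  # assumed alignment head

    def forward(self, det_feat: torch.Tensor, vfm_feat: torch.Tensor) -> torch.Tensor:
        # MSE between projected detector features and frozen (detached) VFM features
        return F.mse_loss(self.proj(det_feat), vfm_feat.detach())


def gam_weight(det_loss, distill_loss, shared_params, eps: float = 1e-8):
    """Scale the distillation term so its gradient norm tracks the detection loss.

    Returns a detached scalar: ||grad(det_loss)|| / ||grad(distill_loss)||.
    """
    g_det = torch.autograd.grad(det_loss, shared_params, retain_graph=True)
    g_dst = torch.autograd.grad(distill_loss, shared_params, retain_graph=True)
    n_det = torch.norm(torch.cat([g.flatten() for g in g_det]))
    n_dst = torch.norm(torch.cat([g.flatten() for g in g_dst]))
    return (n_det / (n_dst + eps)).detach()


if __name__ == "__main__":
    torch.manual_seed(0)
    backbone = nn.Linear(8, 16)            # stand-in for the detector's deep layers
    dsi = DeepSemanticInjectorSketch(16, 32)

    x = torch.randn(4, 8)
    feat = backbone(x)
    det_loss = feat.pow(2).mean()          # stand-in detection loss
    vfm_feat = torch.randn(4, 32)          # stand-in frozen VFM features
    distill_loss = dsi(feat, vfm_feat)

    w = gam_weight(det_loss, distill_loss, list(backbone.parameters()))
    total = det_loss + w * distill_loss    # modulated training objective
    total.backward()
    print(f"GAM weight: {w.item():.4f}")
```

Because the weight is detached, the modulation steers the balance between the two objectives without itself contributing gradients, which is one common way to keep an auxiliary distillation signal from overwhelming the task loss early in training.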