RT-DETRv4: Painlessly Furthering Real-Time Object Detection with Vision Foundation Models

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the feature representation degradation in lightweight object detectors caused by network compression, which prevents simultaneous real-time inference and high accuracy, this paper proposes a vision foundation model (VFM) knowledge transfer method that requires no backbone architecture modification. The approach operates within a knowledge distillation framework and introduces zero deployment overhead or inference latency. Key contributions include: (1) a Deep Semantic Injector (DSI) module that stably aligns high-level VFM semantics with the detector's feature space; and (2) a Gradient-guided Adaptive Modulation (GAM) strategy that dynamically regulates semantic transfer strength during training. Evaluated on COCO, the method achieves 49.7–57.0 AP at 78–273 FPS, outperforming state-of-the-art real-time detectors, and is the first to deliver consistent VFM-driven performance gains while preserving real-time inference speed.

📝 Abstract
Real-time object detection has achieved substantial progress through meticulously designed architectures and optimization strategies. However, the pursuit of high-speed inference via lightweight network designs often leads to degraded feature representation, which hinders further performance improvements and practical on-device deployment. In this paper, we propose a cost-effective and highly adaptable distillation framework that harnesses the rapidly evolving capabilities of Vision Foundation Models (VFMs) to enhance lightweight object detectors. Given the significant architectural and learning objective disparities between VFMs and resource-constrained detectors, achieving stable and task-aligned semantic transfer is challenging. To address this, on one hand, we introduce a Deep Semantic Injector (DSI) module that facilitates the integration of high-level representations from VFMs into the deep layers of the detector. On the other hand, we devise a Gradient-guided Adaptive Modulation (GAM) strategy, which dynamically adjusts the intensity of semantic transfer based on gradient norm ratios. Without increasing deployment and inference overhead, our approach painlessly delivers striking and consistent performance gains across diverse DETR-based models, underscoring its practical utility for real-time detection. Our new model family, RT-DETRv4, achieves state-of-the-art results on COCO, attaining AP scores of 49.7/53.5/55.4/57.0 at corresponding speeds of 273/169/124/78 FPS.
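The GAM strategy described in the abstract adjusts the intensity of semantic transfer based on gradient norm ratios. A minimal sketch of that idea follows; the function name, the exact ratio form, and the clipping bound are assumptions for illustration, not the paper's implementation:

```python
def gam_weight(det_grad_norm: float,
               distill_grad_norm: float,
               base_weight: float = 1.0,
               eps: float = 1e-8,
               max_weight: float = 10.0) -> float:
    """Rescale the distillation loss so its gradient magnitude tracks the
    detection loss gradient (hypothetical reconstruction of GAM).

    det_grad_norm:     norm of gradients from the detection (task) loss
    distill_grad_norm: norm of gradients from the distillation loss
    """
    # Ratio > 1 means the task loss dominates; boost semantic transfer.
    ratio = det_grad_norm / (distill_grad_norm + eps)
    # Clip to keep training stable when the distillation gradient vanishes.
    return min(base_weight * ratio, max_weight)

# Example: detection gradients twice as large -> transfer strength roughly doubles.
w = gam_weight(det_grad_norm=2.0, distill_grad_norm=1.0)
```

In a training loop, `w` would multiply the distillation loss each step, so the transfer strength adapts as the two gradient magnitudes drift apart during optimization.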
Problem

Research questions and friction points this paper is trying to address.

Enhancing lightweight object detectors with Vision Foundation Models
Addressing feature degradation in high-speed real-time detection systems
Achieving stable semantic transfer across different model architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cost-effective distillation framework using Vision Foundation Models
Deep Semantic Injector module integrates high-level VFM representations
Gradient-guided Adaptive Modulation dynamically adjusts semantic transfer
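The DSI bullet above describes injecting high-level VFM representations into the detector's deep layers. A toy sketch of the alignment step, assuming a learnable linear projection from VFM channels into the detector's (smaller) channel space; all shapes, names, and the MSE objective here are illustrative assumptions:

```python
import numpy as np

def dsi_align_loss(vfm_feat: np.ndarray,
                   det_feat: np.ndarray,
                   proj: np.ndarray) -> float:
    """Project VFM features into the detector's channel space and measure
    how well they align (hypothetical sketch of the DSI idea)."""
    injected = vfm_feat @ proj                      # (N, C_vfm) -> (N, C_det)
    return float(np.mean((injected - det_feat) ** 2))

rng = np.random.default_rng(0)
vfm = rng.normal(size=(4, 8))   # toy VFM tokens, 8 channels
det = rng.normal(size=(4, 3))   # toy detector features, 3 channels
W = rng.normal(size=(8, 3))     # learnable projection matrix (assumed)

loss = dsi_align_loss(vfm, det, W)
```

Minimizing such a loss during training pulls the detector's deep features toward the projected VFM semantics, while `W` is discarded at deployment, which is consistent with the paper's claim of zero inference overhead.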
Zijun Liao
School of Electronic and Computer Engineering, Peking University, Shenzhen, China
Yian Zhao
Peking University
Xin Shan
School of Electronic and Computer Engineering, Peking University, Shenzhen, China
Yu Yan
School of Electronic and Computer Engineering, Peking University, Shenzhen, China
Chang Liu
Department of Automation and BNRist, Tsinghua University, Beijing, China
Lei Lu
School of Electronic and Computer Engineering, Peking University, Shenzhen, China
Xiangyang Ji
Department of Automation and BNRist, Tsinghua University, Beijing, China
Jie Chen
School of Electronic and Computer Engineering, Peking University, Shenzhen, China