🤖 AI Summary
This work addresses the sparse supervision and slow convergence of DETR-style detectors caused by one-to-one label assignment, as well as the structural complexity and limited diversity of existing one-to-many approaches. The authors propose LoRA-DETR, which introduces multiple low-rank adaptation (LoRA) branches during training to enable parallel, diverse one-to-many label assignment strategies, thereby enriching the supervision signal. These auxiliary branches are removed at inference time, preserving a lightweight model architecture. Notably, the method requires no modification to the backbone network and achieves parameter-efficient, architecture-agnostic integration of multiple assignment strategies. The study demonstrates that supervision diversity—not merely quantity—is key to performance gains, achieving state-of-the-art results across multiple DETR baselines without incurring additional inference overhead.
📝 Abstract
Label assignment is a critical component in object detectors, particularly within DETR-style frameworks where the one-to-one matching strategy, despite its end-to-end elegance, suffers from slow convergence due to sparse supervision. While recent works have explored one-to-many assignments to enrich supervisory signals, they often introduce complex, architecture-specific modifications and typically focus on a single auxiliary strategy, lacking a unified and scalable design. In this paper, we first systematically investigate the effects of "one-to-many" supervision and reveal a surprising insight: performance gains are driven not by the sheer quantity of supervision, but by the diversity of the assignment strategies employed. This finding suggests that a more elegant, parameter-efficient approach is attainable. Building on this insight, we propose LoRA-DETR, a flexible and lightweight framework that seamlessly integrates diverse assignment strategies into any DETR-style detector. Our method augments the primary network with multiple Low-Rank Adaptation (LoRA) branches during training, each instantiating a different one-to-many assignment rule. These branches act as auxiliary modules that inject rich, varied supervisory gradients into the main model and are discarded during inference, thus incurring no additional computational cost. This design promotes robust joint optimization while maintaining the architectural simplicity of the original detector. Extensive experiments on different baselines validate the effectiveness of our approach. Our work presents a new paradigm for enhancing detectors, demonstrating that diverse "one-to-many" supervision can be integrated to achieve state-of-the-art results without compromising model elegance.
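The core mechanism described above, shared base weights plus per-strategy low-rank branches that are active only during training, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual implementation: the class name, shapes, and the idea of selecting a branch per auxiliary assignment strategy are all illustrative.

```python
import numpy as np

class LoRABranchedLinear:
    """A linear layer with several low-rank (LoRA) branches, one per
    auxiliary one-to-many assignment strategy. Illustrative sketch only;
    names and shapes are assumptions, not the paper's implementation."""

    def __init__(self, d_in, d_out, rank=4, n_branches=3, seed=0):
        rng = np.random.default_rng(seed)
        # Shared base weight used by the main (one-to-one) branch.
        self.W = rng.standard_normal((d_out, d_in)) * 0.02
        # Each branch k holds a low-rank pair (B_k, A_k). B is zero-initialized,
        # so every branch's output starts identical to the base output.
        self.A = [rng.standard_normal((rank, d_in)) * 0.02 for _ in range(n_branches)]
        self.B = [np.zeros((d_out, rank)) for _ in range(n_branches)]

    def forward(self, x, branch=None):
        """branch=None -> inference path: base weights only, zero extra cost.
        branch=k -> training path for auxiliary strategy k, adding the
        low-rank update x @ (B_k A_k)^T on top of the base output."""
        y = x @ self.W.T
        if branch is not None:
            y = y + x @ (self.B[branch] @ self.A[branch]).T
        return y
```

At inference only the base path runs, so the deployed detector is unchanged; during training each auxiliary loss (from its own assignment rule) backpropagates through its dedicated low-rank pair while all branches share the base weights, which is where the joint optimization happens.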