Mr. DETR: Instructive Multi-Route Training for Detection Transformers

📅 2024-12-13
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the training inefficiency of detection Transformers, this paper proposes a multi-route collaborative training framework. While retaining standard one-to-one matching as the primary route, it introduces two auxiliary routes that jointly optimize a one-to-many assignment objective. Key contributions include: (1) a novel instructive self-attention mechanism that dynamically steers object queries toward one-to-many prediction; and (2) empirical evidence that individual decoder components can independently accommodate dual-objective learning, yielding training gains without any inference overhead. The method unifies one-to-one and one-to-many supervision under shared weights, requiring no architectural modifications and no additional inference cost. Evaluated on COCO and other benchmarks, the approach improves AP by 1.2–2.4 points across DETR variants while keeping inference speed identical to the baselines.
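The two training targets the summary contrasts can be sketched in a few lines. This is a minimal, illustrative toy, not the paper's implementation: the greedy matcher stands in for Hungarian matching, and the cost matrix, function names, and `k` are assumptions.

```python
# Hedged sketch: one-to-one vs. one-to-many label assignment, the two
# objectives Mr. DETR optimizes jointly. cost[i][j] is the (made-up)
# cost of explaining ground-truth object i with query j; lower is better.

def one_to_one_assign(cost):
    """Greedy stand-in for Hungarian matching: each ground-truth object
    claims its cheapest still-unused query (exactly one query per object)."""
    used, matches = set(), {}
    for gt, row in enumerate(cost):
        q = min((j for j in range(len(row)) if j not in used),
                key=lambda j: row[j])
        used.add(q)
        matches[gt] = [q]
    return matches

def one_to_many_assign(cost, k=3):
    """Auxiliary target: each ground truth supervises its k cheapest
    queries, giving the decoder denser positive gradients during training."""
    return {gt: sorted(range(len(row)), key=lambda j: row[j])[:k]
            for gt, row in enumerate(cost)}

cost = [
    [0.1, 0.9, 0.4, 0.3],   # object 0
    [0.8, 0.2, 0.5, 0.7],   # object 1
]
print(one_to_one_assign(cost))        # {0: [0], 1: [1]}
print(one_to_many_assign(cost, k=2))  # {0: [0, 3], 1: [1, 2]}
```

The one-to-many route exists only to enrich supervision; at test time predictions come from the one-to-one route, so no duplicate-removal step is added.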

๐Ÿ“ Abstract
Existing methods enhance the training of detection transformers by incorporating an auxiliary one-to-many assignment. In this work, we treat the model as a multi-task framework, simultaneously performing one-to-one and one-to-many predictions. We investigate the roles of each component in the transformer decoder across these two training targets, including self-attention, cross-attention, and feed-forward network. Our empirical results demonstrate that any independent component in the decoder can effectively learn both targets simultaneously, even when other components are shared. This finding leads us to propose a multi-route training mechanism, featuring a primary route for one-to-one prediction and two auxiliary training routes for one-to-many prediction. We enhance the training mechanism with a novel instructive self-attention that dynamically and flexibly guides object queries for one-to-many prediction. The auxiliary routes are removed during inference, ensuring no impact on model architecture or inference cost. We conduct extensive experiments on various baselines, achieving consistent improvements as shown in Figure 1. Project page: https://visual-ai.github.io/mrdetr
Problem

Research questions and friction points this paper is trying to address.

Improving detection transformer training by jointly performing one-to-one and one-to-many prediction
Understanding which decoder components (self-attention, cross-attention, feed-forward network) can learn both targets simultaneously
Designing a multi-route training scheme with instructive self-attention that improves accuracy without inference overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-route training with one-to-one and one-to-many predictions
Instructive self-attention dynamically guides object queries
Auxiliary routes removed during inference to maintain efficiency
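The last two points can be made concrete with a small control-flow sketch. Everything here is illustrative: the route names, the single `shared_layer`, and the call log are assumptions standing in for the paper's decoder, which shares weights across routes and drops the auxiliary ones at inference.

```python
# Hedged sketch of multi-route training: during training, object queries
# pass through a primary route (one-to-one target) plus two auxiliary
# routes (one-to-many target) that reuse the same decoder weights; at
# inference the auxiliary routes are skipped, so the deployed model is
# architecturally identical to the baseline.

class MultiRouteDecoder:
    def __init__(self):
        self.calls = []            # record which routes actually ran

    def shared_layer(self, queries, route):
        self.calls.append(route)   # one set of weights serves every route
        return queries

    def forward(self, queries, training):
        out = self.shared_layer(queries, "one-to-one")    # primary route
        if training:                                      # auxiliary routes,
            self.shared_layer(queries, "one-to-many/a")   # e.g. guided by
            self.shared_layer(queries, "one-to-many/b")   # instructive attn
        return out

dec = MultiRouteDecoder()
dec.forward(["q0", "q1"], training=True)
print(dec.calls)   # ['one-to-one', 'one-to-many/a', 'one-to-many/b']
dec.calls.clear()
dec.forward(["q0", "q1"], training=False)
print(dec.calls)   # ['one-to-one']  -> inference cost unchanged
```

Because the auxiliary routes touch only the training graph, removing them leaves the inference path, parameter count, and latency exactly as in the baseline.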