🤖 AI Summary
In temporal sentence grounding (TSG), DETR-based models suffer from query role ambiguity and redundant predictions due to the lack of explicit supervision. To address this, we propose a length-aware Transformer framework. Our core innovation is the first explicit incorporation of temporal length priors from video-text pairs into the architecture, enabling a length-aware query grouping mechanism that assigns a distinct, duration-specific semantic role to each query group. We further introduce a joint length classification task to suppress mismatched predictions. Built upon the DETR backbone, our method integrates multi-task learning with cross-modal temporal alignment and achieves state-of-the-art performance on three mainstream benchmarks. Ablation studies confirm the critical importance of the temporal length prior and validate the effectiveness of each component.
📝 Abstract
Temporal sentence grounding (TSG) is a challenging task that aims to localize the temporal segment within an untrimmed video corresponding to a given natural language description. Benefiting from the design of learnable queries, DETR-based models have achieved substantial advances on the TSG task. However, the absence of explicit supervision often causes the learned queries to overlap in their roles, leading to redundant predictions. We therefore propose to improve TSG by making each query fulfill its designated role, leveraging the length priors of the video-description pairs. In this paper, we introduce the Length-Aware Transformer (LATR) for TSG, which assigns different queries to handle predictions of different temporal lengths. Specifically, we divide all queries into three groups, responsible for segments with short, middle, and long temporal durations, respectively. During training, an additional length classification task is introduced, and predictions from queries whose length class does not match are suppressed, guiding each query to specialize in its designated function. Extensive experiments demonstrate the effectiveness of our LATR, which achieves state-of-the-art performance on three public benchmarks. Furthermore, ablation studies validate the contribution of each component of our method and the critical role of incorporating length priors into the TSG task.
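The query grouping and mismatch suppression described above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the even three-way query split follows the abstract, but the segment representation (normalized width in [0, 1]) and the length thresholds `short_max` and `long_min` are hypothetical assumptions introduced here for clarity.

```python
# Hypothetical sketch of length-aware query grouping and suppression.
# Segments are (center, width) normalized to [0, 1]; thresholds are
# illustrative assumptions, not values from the paper.

SHORT, MIDDLE, LONG = 0, 1, 2

def length_group(width, short_max=0.2, long_min=0.5):
    """Bucket a segment's normalized width into a duration class."""
    if width < short_max:
        return SHORT
    if width < long_min:
        return MIDDLE
    return LONG

def assign_query_groups(num_queries):
    """Split queries evenly into short / middle / long groups."""
    per_group = num_queries // 3
    return [min(i // per_group, LONG) for i in range(num_queries)]

def suppress_mismatched(predictions, query_groups):
    """Keep only predictions whose duration class matches their query's group.

    predictions: list of (center, width, score) tuples, one per query.
    """
    kept = []
    for (center, width, score), group in zip(predictions, query_groups):
        if length_group(width) == group:
            kept.append((center, width, score))
    return kept
```

At inference, a prediction such as a wide segment emitted by a "short" query would be filtered out, which is one simple way to realize the role specialization the length classification loss encourages during training.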