Length Matters: Length-Aware Transformer for Temporal Sentence Grounding

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
In temporal sentence grounding (TSG), DETR-based models suffer from query role ambiguity and redundant predictions due to the lack of explicit supervision. To address this, we propose a length-aware Transformer framework. Our core innovation is to explicitly incorporate, for the first time, the temporal length priors of video-text pairs into the architecture, enabling a length-aware query grouping mechanism that assigns a distinct duration-specific semantic role to each query group. We further introduce a joint length classification task that suppresses predictions from queries whose assigned length does not match. Built on a DETR backbone, our method combines multi-task learning with cross-modal temporal alignment and achieves state-of-the-art performance on three mainstream benchmarks. Ablation studies confirm the critical importance of the temporal length prior and validate the effectiveness of each component.
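
This page gives no pseudocode, but the query grouping described above can be pictured with a minimal PyTorch sketch. Everything below (the query count, group sizes, class name, and model width) is an illustrative assumption, not the paper's actual implementation:

```python
import torch
import torch.nn as nn

# Illustrative constants, not taken from the paper.
NUM_QUERIES = 30
NUM_GROUPS = 3  # short, middle, long
QUERIES_PER_GROUP = NUM_QUERIES // NUM_GROUPS

class LengthAwareQueries(nn.Module):
    """Learnable DETR-style queries partitioned into fixed duration groups."""

    def __init__(self, d_model: int = 256):
        super().__init__()
        self.query_embed = nn.Embedding(NUM_QUERIES, d_model)
        # group_ids[i] is the duration bin query i is responsible for:
        # 0 = short, 1 = middle, 2 = long.
        self.register_buffer(
            "group_ids", torch.arange(NUM_QUERIES) // QUERIES_PER_GROUP
        )

    def forward(self, batch_size: int):
        # The same queries are used for every sample; only the fixed
        # group assignment distinguishes this from vanilla DETR queries.
        queries = self.query_embed.weight.unsqueeze(0).expand(batch_size, -1, -1)
        return queries, self.group_ids
```

The grouped queries would then pass through a standard DETR decoder; the duration-specific specialization comes from how their predictions are supervised, sketched after the abstract below.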

📝 Abstract
Temporal sentence grounding (TSG) is a highly challenging task that aims to localize the temporal segment of an untrimmed video corresponding to a given natural language description. Benefiting from the design of learnable queries, DETR-based models have achieved substantial advances on the TSG task. However, the absence of explicit supervision often causes the learned queries to overlap in their roles, leading to redundant predictions. We therefore propose to improve TSG by making each query fulfill its designated role, leveraging the length priors of the video-description pairs. In this paper, we introduce the Length-Aware Transformer (LATR) for TSG, which assigns different queries to handle predictions of different temporal lengths. Specifically, we divide all queries into three groups, responsible for segments of short, middle, and long temporal duration, respectively. During training, an additional length classification task is introduced, and predictions from queries with mismatched lengths are suppressed, guiding each query to specialize in its designated function. Extensive experiments demonstrate the effectiveness of LATR, which achieves state-of-the-art performance on three public benchmarks. Ablation studies further validate the contribution of each component of our method and the critical role of incorporating length priors into the TSG task.
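
As a rough sketch of the suppression step from the abstract: a prediction whose span length falls outside its query's assigned duration bin can simply be down-weighted. The bin boundaries (0.15 / 0.4 of video length), tensor shapes, and function names below are assumptions for illustration only:

```python
import torch

def length_to_group(widths: torch.Tensor, bins=(0.15, 0.4)) -> torch.Tensor:
    """Map normalized segment widths (fraction of video length, in 0..1)
    to a duration group id: 0 = short, 1 = middle, 2 = long.
    The bin boundaries 0.15 / 0.4 are illustrative assumptions."""
    return (widths > bins[0]).long() + (widths > bins[1]).long()

def suppress_mismatched(spans: torch.Tensor,
                        scores: torch.Tensor,
                        group_ids: torch.Tensor) -> torch.Tensor:
    """Zero the confidence of any query whose predicted span length
    falls outside its assigned duration bin.

    spans:     (B, Q, 2) normalized (center, width) predictions
    scores:    (B, Q)    foreground/matching confidences
    group_ids: (Q,)      fixed duration group of each query
    """
    pred_groups = length_to_group(spans[..., 1])   # (B, Q)
    keep = pred_groups == group_ids.unsqueeze(0)   # (B, Q)
    return scores * keep.float()
```

Per the abstract, training also adds a length classification task; a cross-entropy loss over the three duration bins would be the natural minimal form, though the exact head design is not specified on this page.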
Problem

Research questions and friction points this paper is trying to address.

Improving temporal sentence grounding with length-aware queries
Reducing redundant predictions via explicit query role assignment
Enhancing TSG accuracy using temporal length priors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Length-Aware Transformer for temporal grounding
Queries grouped by short, middle, long durations
Length classification suppresses mismatched predictions
👥 Authors
Yifan Wang
School of Intelligence Science and Technology, University of Science and Technology Beijing
Ziyi Liu
School of Intelligence Science and Technology, University of Science and Technology Beijing
Xiaolong Sun
Xi'an Jiaotong University
Jiawei Wang
School of Intelligence Science and Technology, University of Science and Technology Beijing
Hongmin Liu
School of Intelligence Science and Technology, University of Science and Technology Beijing