🤖 AI Summary
To address the weak discriminability of continuous features in temporal video grounding—particularly the difficulty in distinguishing relevant from irrelevant moments—this paper proposes a discrete moment quantization framework. Methodologically, it introduces (1) a learnable discrete codebook coupled with a lossless, clustering-based soft matching mechanism, replacing conventional hard quantization; (2) prior-guided codebook initialization and joint projection to enhance codebook representational quality; and (3) a plug-and-play architectural design for seamless integration. The approach achieves significant improvements over state-of-the-art methods on six mainstream benchmarks. Qualitative analysis demonstrates its ability to effectively aggregate semantically relevant segments while suppressing irrelevant temporal instances, thereby substantially enhancing discriminative capability for temporal localization.
📝 Abstract
Video temporal grounding is a critical video understanding task, which aims to localize moments relevant to a language description. The challenge of this task lies in distinguishing relevant from irrelevant moments. Previous methods, which focus on learning continuous features, exhibit weak differentiation between foreground and background features. In this paper, we propose a novel Moment-Quantization based Video Temporal Grounding method (MQVTG), which quantizes the input video into various discrete vectors to enhance the discrimination between relevant and irrelevant moments. Specifically, MQVTG maintains a learnable moment codebook, where each video moment matches a codeword. Considering visual diversity, i.e., various visual expressions of the same moment, MQVTG treats moment-codeword matching as a clustering process rather than a hard assignment to discrete vectors, avoiding the loss of useful information caused by direct hard quantization. Additionally, we employ effective prior-initialization and joint-projection strategies to enhance the maintained moment codebook. With its simple implementation, the proposed method can be integrated into existing temporal grounding models as a plug-and-play component. Extensive experiments on six popular benchmarks demonstrate the effectiveness and generalizability of MQVTG, which significantly outperforms state-of-the-art methods. Further qualitative analysis shows that our method effectively groups relevant features and separates irrelevant ones, aligning with our goal of enhancing discrimination.
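To make the contrast between hard quantization and the clustering-style soft matching concrete, here is a minimal numpy sketch. All names (`soft_quantize`, the temperature value, cosine similarity as the matching score) are illustrative assumptions, not the paper's actual formulation: each moment feature is expressed as a similarity-weighted mixture of codewords instead of being snapped to its single nearest codeword.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def soft_quantize(moment_feats, codebook, temperature=0.07):
    """Soft moment-codeword matching (hypothetical sketch): each moment is
    represented by a similarity-weighted combination of all codewords,
    unlike hard quantization, which keeps only the argmax codeword."""
    # Cosine similarity between L2-normalized moments and codewords.
    feats = moment_feats / np.linalg.norm(moment_feats, axis=-1, keepdims=True)
    codes = codebook / np.linalg.norm(codebook, axis=-1, keepdims=True)
    sim = feats @ codes.T                 # (num_moments, num_codewords)
    weights = softmax(sim / temperature)  # soft cluster assignments, rows sum to 1
    return weights @ codebook             # (num_moments, feat_dim)

rng = np.random.default_rng(0)
moments = rng.normal(size=(8, 16))    # 8 video moments, 16-dim features
codebook = rng.normal(size=(4, 16))   # 4 learnable codewords
quantized = soft_quantize(moments, codebook)
```

Because the assignment weights are differentiable in both the features and the codebook, gradients flow to every codeword, which is one plausible way a "lossless" soft matching avoids the information bottleneck of a hard argmax lookup.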