🤖 AI Summary
Long Video Temporal Grounding (LVTG) suffers from high computational overhead and poor scalability. To address this, we propose a "delegate-and-conquer" paradigm: a lightweight sidekick encoder rapidly extracts frame-level features across the entire video and generates a saliency map; only the most relevant segments, identified via saliency-guided pruning, are forwarded to a high-cost expert encoder for fine-grained processing. Furthermore, we design a query-aware multi-scale temporal aggregation and refinement mechanism to enable cross-scale semantic alignment and precise localization. This dual-encoder collaborative architecture is the first to deeply integrate saliency-guided pruning with query-driven temporal modeling. Evaluated on two major LVTG benchmarks, our method reduces computational cost by up to 47% while outperforming existing approaches in localization accuracy, establishing a new state-of-the-art trade-off between efficiency and precision.
📝 Abstract
Long Video Temporal Grounding (LVTG) aims to identify specific moments within lengthy videos based on user-provided text queries for effective content retrieval. Existing methods divide the video into clips and process every clip with a full-scale expert encoder, which scales poorly because the computational cost of processing the large number of clips in a long video is prohibitive. To address this issue, we introduce DeCafNet, an approach employing a "delegate-and-conquer" strategy to achieve computational efficiency without sacrificing grounding performance. DeCafNet introduces a sidekick encoder that performs dense feature extraction over all video clips in a resource-efficient manner, while generating a saliency map to identify the most relevant clips for full processing by the expert encoder. To effectively leverage features from the sidekick and expert encoders, which exist at different temporal resolutions, we introduce DeCaf-Grounder, which unifies and refines them via query-aware temporal aggregation and multi-scale temporal refinement for accurate grounding. Experiments on two LVTG benchmark datasets demonstrate that DeCafNet reduces computation by up to 47% while still outperforming existing methods, establishing a new state-of-the-art for LVTG in terms of both efficiency and performance. Our code is available at https://github.com/ZijiaLewisLu/CVPR2025-DeCafNet.
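To make the delegate-and-conquer idea concrete, the sketch below shows the core saliency-guided pruning step in isolation: a cheap per-clip saliency score decides which clips are delegated to the expensive expert encoder. This is a minimal illustration, not DeCafNet's actual implementation; the function name, the flat `keep_ratio` parameter, and the use of plain NumPy arrays are all assumptions for demonstration.

```python
import numpy as np

def select_salient_clips(saliency, keep_ratio=0.5):
    """Hypothetical sketch of saliency-guided pruning.

    saliency   : per-clip relevance scores from a lightweight sidekick encoder
    keep_ratio : fraction of clips forwarded to the full-scale expert encoder
    Returns the indices of the kept clips in temporal order.
    """
    num_keep = max(1, int(round(len(saliency) * keep_ratio)))
    # Take the highest-scoring clips, then restore temporal order
    # so the expert encoder sees them as a coherent subsequence.
    top = np.argsort(saliency)[-num_keep:]
    return np.sort(top)

# Six clips; only the three most query-relevant ones reach the expert encoder.
scores = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.7])
print(select_salient_clips(scores, keep_ratio=0.5))  # -> [1 3 5]
```

With a `keep_ratio` of 0.5, the expert encoder processes only half the clips, which is the source of the computational savings the abstract describes; the sidekick's dense features for the pruned clips can still feed the grounding head at their coarser resolution.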