Follow the Saliency: Supervised Saliency for Retrieval-augmented Dense Video Captioning

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of accurately aligning event boundaries in retrieval-augmented dense video captioning (DVC), a task hindered by existing methods’ reliance on heuristic strategies that disregard ground-truth event annotations. To overcome this limitation, the authors propose the STaRC framework, which introduces, for the first time, a highlight detection module trained with binary labels automatically derived from DVC ground-truth data. This module provides frame-level saliency supervision, serving as a unified temporal signal to guide both retrieval and caption generation. By integrating saliency-guided temporal segmentation and injecting saliency-aware prompts into the decoder, STaRC achieves precise event boundary alignment and contextually grounded description generation. Experiments on the YouCook2 and ViTT benchmarks demonstrate state-of-the-art performance across most metrics, with significant improvements in temporal segmentation accuracy and caption quality.
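The label-derivation step the summary describes (binary frame-level saliency labels from ground-truth event spans) can be sketched as follows. This is a minimal illustration, not the paper's actual code: the function name, the `fps` parameter, and the 0/1 label convention are assumptions.

```python
def make_saliency_labels(num_frames, events, fps=1.0):
    """Mark a sampled frame 1 if it falls inside any annotated event span.

    events: list of (start_sec, end_sec) ground-truth event boundaries.
    fps: sampling rate of the frame/feature sequence (assumed here).
    """
    labels = [0] * num_frames
    for start, end in events:
        lo = max(0, int(start * fps))
        hi = min(num_frames, int(end * fps) + 1)
        for t in range(lo, hi):
            labels[t] = 1  # frame lies inside a ground-truth event
    return labels
```

Under this scheme the highlight detection module needs no extra annotation: the DVC timestamps themselves supply the supervision signal.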

📝 Abstract
Existing retrieval-augmented approaches for Dense Video Captioning (DVC) often fail to achieve accurate temporal segmentation aligned with true event boundaries, as they rely on heuristic strategies that overlook ground-truth event boundaries. The proposed framework, **STaRC**, overcomes this limitation by supervising frame-level saliency through a highlight detection module. The module is trained on binary labels derived directly from DVC ground-truth annotations, requiring no additional annotation. We also propose to utilize the saliency scores as a unified temporal signal that drives retrieval via saliency-guided segmentation and informs caption generation through explicit Saliency Prompts injected into the decoder. By enforcing saliency-constrained segmentation, our method produces temporally coherent segments that align closely with actual event transitions, leading to more accurate retrieval and contextually grounded caption generation. We conduct comprehensive evaluations on the YouCook2 and ViTT benchmarks, where STaRC achieves state-of-the-art performance across most metrics. Our code is available at https://github.com/ermitaju1/STaRC
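The abstract's saliency-guided segmentation step (turning per-frame saliency scores into candidate event segments) can be sketched as a simple thresholding pass. This is a hedged illustration of the general idea; the function name and the fixed threshold are assumptions, and the paper's actual segmentation may be more sophisticated.

```python
def segment_by_saliency(scores, threshold=0.5):
    """Group contiguous frames whose saliency meets the threshold
    into candidate event segments, returned as (start, end) frame indices."""
    segments = []
    start = None
    for t, s in enumerate(scores):
        if s >= threshold and start is None:
            start = t  # open a new segment at the first salient frame
        elif s < threshold and start is not None:
            segments.append((start, t - 1))  # close segment before the dip
            start = None
    if start is not None:  # segment runs to the end of the video
        segments.append((start, len(scores) - 1))
    return segments
```

Each resulting segment can then serve both as a retrieval query window and as the temporal scope for a generated caption, which is how a single saliency signal can drive both stages.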
Problem

Research questions and friction points this paper is trying to address.

Dense Video Captioning
temporal segmentation
event boundaries
retrieval-augmented
saliency
Innovation

Methods, ideas, or system contributions that make the work stand out.

saliency supervision
retrieval-augmented dense video captioning
highlight detection
saliency-guided segmentation
saliency prompts