Minimal Clips, Maximum Salience: Long Video Summarization via Key Moment Extraction

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Long-video multimodal summarization suffers from loss of visually salient information and from high computational overhead. Method: the paper proposes a semantic-saliency-driven, segment-level key-moment extraction paradigm: a lightweight video captioning model first generates a description for each segment; a large language model (LLM) then selects the top-K semantically most salient segments to build the summary. An automatic evaluation framework based on MovieSum script alignment is introduced to assess segment quality without human annotations. Results: the method reconstructs complete plot summaries using less than 6% of the original video frames, achieving performance on MovieSum comparable to human-curated reference segments and substantially outperforming random sampling, while reducing inference cost by an order of magnitude. The work is the first to combine semantic saliency modeling with sparse segment selection, empirically showing that extremely low-density key segments suffice for high-quality multimodal understanding.

📝 Abstract
Vision-Language Models (VLMs) are able to process increasingly longer videos. Yet, important visual information is easily lost throughout the entire context and missed by VLMs. Also, it is important to design tools that enable cost-effective analysis of lengthy video content. In this paper, we propose a clip selection method that targets key video moments to be included in a multimodal summary. We divide the video into short clips and generate compact visual descriptions of each using a lightweight video captioning model. These are then passed to a large language model (LLM), which selects the K clips containing the most relevant visual information for a multimodal summary. We evaluate our approach on reference clips for the task, automatically derived from full human-annotated screenplays and summaries in the MovieSum dataset. We further show that these reference clips (less than 6% of the movie) are sufficient to build a complete multimodal summary of the movies in MovieSum. Using our clip selection method, we achieve a summarization performance close to that of these reference clips while capturing substantially more relevant video information than random clip selection. Importantly, we maintain low computational cost by relying on a lightweight captioning model.
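The pipeline in the abstract (split the video into short clips, caption each clip with a lightweight model, then have an LLM pick the K most informative clips) can be sketched as follows. This is a hedged illustration, not the paper's implementation: `caption_fn` and `rank_fn` are hypothetical stand-ins for the captioning model and the LLM ranker, and the toy stubs below only exercise the selection plumbing.

```python
from typing import Callable, List, Tuple

Clip = Tuple[int, int]  # (start_frame, end_frame), end exclusive

def split_into_clips(num_frames: int, clip_len: int) -> List[Clip]:
    """Split a video of num_frames into contiguous clips of clip_len frames
    (the final clip may be shorter)."""
    return [(s, min(s + clip_len, num_frames))
            for s in range(0, num_frames, clip_len)]

def select_key_clips(
    clips: List[Clip],
    caption_fn: Callable[[Clip], str],      # lightweight captioner (assumed)
    rank_fn: Callable[[List[str]], List[int]],  # LLM ranking of captions (assumed)
    k: int,
) -> List[Clip]:
    """Caption every clip, ask the ranker for indices ordered by salience,
    keep the top K, and return them in temporal order for the summary."""
    captions = [caption_fn(c) for c in clips]
    top_k = rank_fn(captions)[:k]
    return [clips[i] for i in sorted(top_k)]

# Toy stand-ins; a real system would call a video captioner and an LLM.
clips = split_into_clips(num_frames=1000, clip_len=50)      # 20 clips
selected = select_key_clips(
    clips,
    caption_fn=lambda c: f"events in frames {c[0]}-{c[1]}",
    rank_fn=lambda caps: list(range(len(caps)))[::-1],      # dummy "ranking"
    k=3,
)
frame_fraction = sum(e - s for s, e in selected) / 1000
# 3 of 20 clips selected; the paper reports <6% of frames sufficing on MovieSum.
```

The design keeps the expensive model (the LLM) operating on short text captions rather than raw frames, which is where the computational savings come from.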
Problem

Research questions and friction points this paper is trying to address.

Extracts key video moments for summaries
Reduces computational cost in video analysis
Improves visual information retention in VLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight video captioning model generates clip descriptions
Large language model selects most relevant clips for summary
Method maintains low computational cost while maximizing salience