ViSMaP: Unsupervised Hour-long Video Summarisation by Meta-Prompting

📅 2025-04-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Long-form video understanding faces three core challenges: sparse critical events, no pre-segmentation, and scarce high-quality annotations; existing supervised methods depend on costly and inconsistent manual annotation of long videos. ViSMaP addresses this with an unsupervised long-video summarisation framework. An iterative meta-prompting mechanism uses a pre-trained short-video understanding model to produce initial segment descriptions, which drive three large language models, in generation, evaluation, and prompt-optimisation roles, to collaboratively refine high-fidelity pseudo-summaries over repeated iterations; these pseudo-summaries then serve as supervision for fine-tuning. Crucially, ViSMaP requires no human annotations for long videos and can summarise hour-long videos automatically. It performs on par with fully supervised state-of-the-art methods across multiple benchmarks and generalises well across domains, offering a practical route to low-resource long-form video understanding.

📝 Abstract
We introduce ViSMaP: Unsupervised Video Summarisation by Meta Prompting, a system to summarise hour-long videos with no supervision. Most existing video understanding models work well on short videos of pre-segmented events, yet they struggle to summarise longer videos where relevant events are sparsely distributed and not pre-segmented. Moreover, long-form video understanding often relies on supervised hierarchical training that needs extensive annotations, which are costly, slow and prone to inconsistency. With ViSMaP we bridge the gap between short videos (where annotated data is plentiful) and long ones (where it is not). We rely on LLMs to create optimised pseudo-summaries of long videos using segment descriptions from short ones. These pseudo-summaries are used as training data for a model that generates long-form video summaries, bypassing the need for expensive annotations of long videos. Specifically, we adopt a meta-prompting strategy to iteratively generate and refine pseudo-summaries of long videos. The strategy leverages short clip descriptions obtained from a supervised short-video model to guide the summary. Each iteration uses three LLMs working in sequence: one to generate the pseudo-summary from clip descriptions, another to evaluate it, and a third to optimise the prompt of the generator. This iteration is necessary because the quality of the pseudo-summaries is highly dependent on the generator prompt and varies widely among videos. We evaluate our summaries extensively on multiple datasets; our results show that ViSMaP achieves performance comparable to fully supervised state-of-the-art models while generalising across domains without sacrificing performance. Code will be released upon publication.
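The three-LLM iteration described in the abstract (generate, evaluate, optimise the generator's prompt) can be sketched as a simple loop. This is a minimal illustration, not the authors' implementation: the function names, the score-based stopping rule, and the stub LLM calls are all assumptions standing in for real model queries.

```python
# Illustrative sketch of a meta-prompting loop with three LLM roles.
# The three functions below are stubs standing in for actual LLM calls;
# in a real system each would query a language model.

def generate_summary(prompt: str, clip_descriptions: list[str]) -> str:
    # Generator role (stub): produce a pseudo-summary from clip descriptions.
    return prompt + " " + "; ".join(clip_descriptions)

def evaluate_summary(summary: str, clip_descriptions: list[str]) -> float:
    # Evaluator role (stub): score how many clip descriptions the summary covers.
    hits = sum(1 for d in clip_descriptions if d in summary)
    return hits / len(clip_descriptions)

def optimise_prompt(prompt: str, score: float) -> str:
    # Prompt-optimiser role (stub): rewrite the generator prompt using feedback.
    return prompt + f" (previous attempt scored {score:.2f}; be more complete)"

def meta_prompting_loop(clip_descriptions: list[str],
                        n_iters: int = 3,
                        threshold: float = 0.9) -> tuple[str, float]:
    """Iterate generate -> evaluate -> optimise, keeping the best summary."""
    prompt = "Summarise the following clip descriptions into one coherent summary:"
    best_summary, best_score = "", -1.0
    for _ in range(n_iters):
        summary = generate_summary(prompt, clip_descriptions)
        score = evaluate_summary(summary, clip_descriptions)
        if score > best_score:
            best_summary, best_score = summary, score
        if score >= threshold:  # good enough; stop early
            break
        prompt = optimise_prompt(prompt, score)
    return best_summary, best_score
```

The per-video iteration matters because, as the abstract notes, pseudo-summary quality depends heavily on the generator prompt; the optimiser role lets the prompt adapt to each video rather than relying on one fixed prompt.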
Problem

Research questions and friction points this paper is trying to address.

Summarizing hour-long videos without supervision
Bridging gap between short and long video understanding
Eliminating need for costly long video annotations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised meta-prompting for video summarization
LLMs generate pseudo-summaries from short clips
Iterative refinement with three LLMs improves quality