Unsupervised Transcript-assisted Video Summarization and Highlight Detection

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing video summarization and highlight detection methods predominantly rely on unimodal visual features (i.e., video frames), failing to effectively incorporate semantic information from automatic speech recognition (ASR) transcripts, and are largely supervised—thus constrained by the scarcity of high-quality annotations. To address these limitations, we propose the first unsupervised multimodal reinforcement learning framework that jointly models video frames and their corresponding ASR transcripts. Our approach leverages cross-modal alignment, multimodal fused representations, and a composite reward function designed to optimize both diversity and representativeness—enabling end-to-end summary generation and highlight localization without any human supervision. Evaluated on multiple benchmarks, our method significantly outperforms vision-only baselines: it achieves a +3.2 ROUGE-L gain in summary compactness and a +5.8% improvement in highlight detection mAP. These results empirically validate the critical contribution of textual semantics to video understanding and demonstrate the efficacy of multimodal RL for unsupervised video analysis.

Technology Category

Application Category

📝 Abstract
Video consumption is a key part of daily life, but watching entire videos can be tedious. To address this, researchers have explored video summarization and highlight detection to identify key video segments. While some works combine video frames and transcripts, and others tackle video summarization and highlight detection using Reinforcement Learning (RL), no existing work, to the best of our knowledge, integrates both modalities within an RL framework. In this paper, we propose a multimodal pipeline that leverages video frames and their corresponding transcripts to generate a more condensed version of the video and detect highlights using a modality fusion mechanism. The pipeline is trained within an RL framework, which rewards the model for generating diverse and representative summaries while ensuring the inclusion of video segments with meaningful transcript content. The unsupervised nature of the training allows for learning from large-scale unannotated datasets, overcoming the challenge posed by the limited size of existing annotated datasets. Our experiments show that using the transcript in video summarization and highlight detection achieves superior results compared to relying solely on the visual content of the video.
Problem

Research questions and friction points this paper is trying to address.

Integrate video frames and transcripts in RL for summarization
Generate diverse summaries with meaningful transcript content
Overcome limited annotated data with unsupervised learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines video frames and transcripts in RL
Uses unsupervised learning for large datasets
Modality fusion enhances summary diversity
🔎 Similar Papers
No similar papers found.
S
Spyros Barbakos
University of Thessaly, Volos, Greece
C
Charalampos Antoniadis
King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
G
G. Potamianos
University of Thessaly, Volos, Greece
Gianluca Setti
Gianluca Setti
Professor of Electronic Engineering, University of Ferrara
Signal ProcessingNonlinear DynamicsComplex NetworksEMIBiomedical Circuits and Systems