🤖 AI Summary
Existing video summarization and highlight detection methods rely predominantly on unimodal visual features (i.e., video frames), fail to incorporate the semantic information carried by automatic speech recognition (ASR) transcripts, and are largely supervised, which makes them dependent on scarce high-quality annotations. To address these limitations, we propose the first unsupervised multimodal reinforcement learning framework that jointly models video frames and their corresponding ASR transcripts. Our approach combines cross-modal alignment, fused multimodal representations, and a composite reward function that optimizes both diversity and representativeness, enabling end-to-end summary generation and highlight localization without any human supervision. Across multiple benchmarks, our method significantly outperforms vision-only baselines, achieving a +3.2 ROUGE-L gain in summary compactness and a +5.8% improvement in highlight detection mAP. These results confirm the contribution of textual semantics to video understanding and demonstrate the efficacy of multimodal RL for unsupervised video analysis.
📝 Abstract
Video consumption is a key part of daily life, but watching entire videos can be tedious. To address this, researchers have explored video summarization and highlight detection to identify key video segments. While some works combine video frames and transcripts, and others tackle video summarization and highlight detection using Reinforcement Learning (RL), no existing work, to the best of our knowledge, integrates both modalities within an RL framework. In this paper, we propose a multimodal pipeline that leverages video frames and their corresponding transcripts to generate a more condensed version of the video and detect highlights using a modality fusion mechanism. The pipeline is trained within an RL framework, which rewards the model for generating diverse and representative summaries while ensuring the inclusion of video segments with meaningful transcript content. The unsupervised nature of the training allows for learning from large-scale unannotated datasets, overcoming the challenge posed by the limited size of existing annotated datasets. Our experiments show that using the transcript in video summarization and highlight detection achieves superior results compared to relying solely on the visual content of the video.
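The abstract describes a composite reward that favors diverse, representative frame selections while also rewarding segments with meaningful transcript content. The paper does not give the exact formulation, but a minimal sketch of such a reward, assuming cosine-based diversity and nearest-selected-frame representativeness terms (in the style of common unsupervised RL summarizers) plus a hypothetical per-frame transcript-informativeness score, could look like this:

```python
import numpy as np

def diversity_reward(features, selected):
    """Mean pairwise cosine dissimilarity among the selected frames."""
    if len(selected) < 2:
        return 0.0
    sel = features[selected]
    sel = sel / np.linalg.norm(sel, axis=1, keepdims=True)
    sim = sel @ sel.T                      # pairwise cosine similarities
    n = len(selected)
    # average over off-diagonal pairs, converted to dissimilarity
    return float(1.0 - (sim.sum() - n) / (n * (n - 1)))

def representativeness_reward(features, selected):
    """exp(-mean distance) from every frame to its nearest selected frame."""
    dists = np.linalg.norm(
        features[:, None, :] - features[None, selected, :], axis=2
    )
    return float(np.exp(-dists.min(axis=1).mean()))

def composite_reward(frame_feats, text_scores, selected, w=(0.4, 0.4, 0.2)):
    """Weighted sum of diversity, representativeness, and a transcript term.

    `text_scores` is an assumed per-frame informativeness score derived
    from the ASR transcript (e.g., salience of the aligned sentence);
    the weights `w` are illustrative, not taken from the paper.
    """
    r_div = diversity_reward(frame_feats, selected)
    r_rep = representativeness_reward(frame_feats, selected)
    r_txt = float(np.asarray(text_scores)[selected].mean())
    return w[0] * r_div + w[1] * r_rep + w[2] * r_txt
```

In an RL training loop, this scalar reward would score each sampled frame-selection action, and a policy-gradient method (e.g., REINFORCE) would update the selector, which is what allows training on unannotated videos.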