🤖 AI Summary
Existing segmentation methods for long videos rely on predefined priors—such as thresholds, target segment counts, or length constraints—which leads to inaccurate boundary localization. To address this, we propose a fully parameter-free video segmentation framework that identifies semantically coherent segment boundaries without any human-specified hyperparameters. Our approach combines the Minimum Description Length (MDL) principle with a dynamic programming search, enabling principled, data-driven boundary detection over frame-level multimodal features. Evaluated on long-video summarization and retrieval-augmented video question answering, our method consistently outperforms state-of-the-art segmentation baselines and improves downstream task performance, demonstrating generalizability across diverse video understanding applications.
📝 Abstract
The proliferation of creative video content has driven demand for adapting language models to handle video input and enable multimodal understanding. However, end-to-end models struggle to process long videos due to their size and complexity. An effective alternative is to divide a long video into smaller chunks processed separately, which motivates a method for choosing where the chunk boundaries should fall. In this paper, we propose an algorithm for segmenting videos into contiguous chunks based on the minimum description length principle, coupled with a dynamic programming search. Given feature vectors, the algorithm is entirely parameter-free: it requires no threshold and no specification of the number or size of chunks. We show empirically that the breakpoints it produces approximate scene boundaries in long videos more accurately than existing scene-detection methods, even when those methods have access to the true number of scenes. We then showcase the algorithm in two tasks: long-video summarization and retrieval-augmented video question answering. In both cases, scene breaks produced by our algorithm lead to better downstream performance than existing video segmentation methods.
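To make the idea concrete, the MDL-plus-dynamic-programming scheme can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it models each segment by its mean feature vector, uses an assumed two-part code (a Gaussian-style residual cost with a unit quantization scale, plus costs for the segment mean and the breakpoint position), and searches over all contiguous segmentations with an O(n²) dynamic program. The function name `mdl_segment` and the specific cost terms are illustrative assumptions.

```python
import numpy as np

def mdl_segment(X):
    """Sketch of parameter-free segmentation via a two-part MDL cost
    and dynamic programming (illustrative; not the paper's exact cost).

    X: (n, d) array of per-frame feature vectors.
    Returns segment start indices (excluding 0), i.e. the breakpoints.
    """
    n, d = X.shape
    # Prefix sums enable O(1) sum-of-squared-error queries per segment.
    s1 = np.vstack([np.zeros(d), np.cumsum(X, axis=0)])       # sums
    s2 = np.vstack([np.zeros(d), np.cumsum(X * X, axis=0)])   # squared sums

    def cost(i, j):
        # Description length of frames X[i:j] under a single-mean model:
        #   data cost: residual code length (the "+1" acts as an assumed
        #              unit quantization scale, keeping the cost >= 0),
        #   model cost: encoding the d-dim mean (~(d/2) log m) plus the
        #               breakpoint position (~log n).
        m = j - i
        sse = float(np.sum(s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m))
        data = 0.5 * m * d * np.log(1.0 + max(sse, 0.0) / (m * d))
        model = 0.5 * d * np.log(m) + np.log(n)
        return data + model

    # dp[j] = minimal description length of the first j frames.
    dp = np.full(n + 1, np.inf)
    back = np.zeros(n + 1, dtype=int)
    dp[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):
            c = dp[i] + cost(i, j)
            if c < dp[j]:
                dp[j], back[j] = c, i

    # Recover breakpoints by walking the back-pointers from the end.
    cuts, j = [], n
    while j > 0:
        j = int(back[j])
        if j > 0:
            cuts.append(j)
    return sorted(cuts)
```

Because every candidate segmentation is scored by the same code length, no threshold or target segment count is ever supplied: the number of chunks falls out of minimizing the total description length, which is the sense in which the method is parameter-free.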