🤖 AI Summary
Existing video-language models (VLMs) exhibit limited performance on long-video understanding tasks, primarily due to insufficient temporal modeling capacity. To address this, we propose a training-free, parameter-free, and model-agnostic visual prompting method: multiple video frames are spatially and temporally arranged into a single panel image, which is then fed as input to off-the-shelf VLMs, implicitly enhancing their long-range temporal reasoning without architectural modifications or parameter updates. Our approach achieves an efficient trade-off between preserving spatial fidelity and maintaining temporal resolution. We comprehensively evaluate it across five mainstream long-video benchmarks. Notably, on the TimeScope (Long) dataset, our method achieves up to a 19.4% absolute improvement in video question answering accuracy, significantly surpassing the current state-of-the-art for long-video understanding.
📝 Abstract
Recent Video-Language Models (VLMs) achieve promising results on long-video understanding, but their performance still lags behind that achieved on tasks involving images or short videos. This has led to great interest in improving the long-context modeling of VLMs by introducing novel modules and additional complexity. In this paper, we take a different approach: rather than fine-tuning VLMs with the limited data available, we attempt to maximize the performance of existing models. To this end, we propose a novel visual prompting strategy specifically designed for long-video understanding. By combining multiple frames as panels into one image, we effectively trade off spatial detail for temporal resolution. Our approach is training-free, parameter-free, and model-agnostic, and can be seamlessly integrated into existing VLMs. Extensive experiments on five established benchmarks, across a wide range of model architectures, sizes, and context windows, confirm the consistency of our approach. On the TimeScope (Long) dataset, which has the longest videos, video question answering accuracy improves by up to 19.4%. Overall, our method raises the bar for long-video understanding models. We will make our code available upon acceptance.
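To make the panel idea concrete, the sketch below tiles sampled frames into a single grid image, as an off-the-shelf VLM would receive it. This is a minimal illustration only: the paper's exact grid layout, frame-sampling scheme, and any panel annotations may differ, and `make_panel` is a hypothetical helper, not the authors' released code.

```python
import numpy as np

def make_panel(frames, rows, cols):
    """Tile equally sized H x W x C frames into a rows x cols panel image,
    in reading order (left-to-right, top-to-bottom). Unfilled cells stay black."""
    h, w, c = frames[0].shape
    panel = np.zeros((rows * h, cols * w, c), dtype=frames[0].dtype)
    for i, frame in enumerate(frames[: rows * cols]):
        r, col = divmod(i, cols)
        panel[r * h:(r + 1) * h, col * w:(col + 1) * w] = frame
    return panel

# Example: 8 uniformly sampled 90x160 frames arranged as a 2 x 4 panel,
# trading per-frame spatial detail for temporal coverage in one image.
frames = [np.full((90, 160, 3), i * 30, dtype=np.uint8) for i in range(8)]
panel = make_panel(frames, rows=2, cols=4)
print(panel.shape)  # (180, 640, 3)
```

The resulting panel is passed to the VLM as a single image, so a fixed image budget covers many more timesteps than feeding frames individually.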