🤖 AI Summary
This work addresses the lack of dedicated evaluation benchmarks for multimodal large language models (MLLMs) on fast-paced, high-complexity advertisement videos. We introduce VideoAds, the first task-specific benchmark for this domain, covering three core tasks: visual finding, video summary, and visual reasoning. To rigorously assess temporal modeling capabilities, we propose a quantitative measure of advertisement video complexity and an FPS-sensitive evaluation methodology. The benchmark is built on fine-grained human annotation and a unified multi-task evaluation framework. Experiments show that the open-source model Qwen2.5-VL-72B achieves 73.35% accuracy on VideoAds, surpassing GPT-4o (66.82%) and Gemini-1.5 Pro (69.66%), with the proprietary models falling behind chiefly on video summary and visual reasoning; human experts reach 94.27%. These results highlight the untapped potential of open MLLMs for temporally intensive advertisement video analysis. The dataset, code, and evaluation toolkit are fully open-sourced.
📝 Abstract
Advertisement videos serve as a rich and valuable source of purpose-driven information, encompassing high-quality visual, textual, and contextual cues designed to engage viewers. They are often more complex than general videos of similar duration due to their structured narratives and rapid scene transitions, posing significant challenges to multimodal large language models (MLLMs). In this work, we introduce VideoAds, the first dataset tailored for benchmarking the performance of MLLMs on advertisement videos. VideoAds comprises well-curated advertisement videos with complex temporal structures, accompanied by **manually** annotated diverse questions across three core tasks: visual finding, video summary, and visual reasoning. We propose a quantitative measure to compare VideoAds against existing benchmarks in terms of video complexity. Through extensive experiments, we find that Qwen2.5-VL-72B, an open-source MLLM, achieves 73.35% accuracy on VideoAds, outperforming GPT-4o (66.82%) and Gemini-1.5 Pro (69.66%); the two proprietary models fall behind the open-source model especially in video summary and visual reasoning, but perform best in visual finding. Notably, human experts easily achieve a remarkable accuracy of 94.27%. These results underscore the necessity of advancing MLLMs' temporal modeling capabilities and highlight VideoAds as a potentially pivotal benchmark for future research on video understanding that requires high-FPS sampling. The dataset and evaluation code will be publicly available at https://videoadsbenchmark.netlify.app.