🤖 AI Summary
Current vision-language models struggle to accurately comprehend dynamic differences between video pairs, particularly in motion continuity, event evolution, and editing consistency. To address this gap, we introduce the Video Difference Captioning (ViDiC) task, the first systematic formulation of fine-grained difference description across seven categories of dynamic change, including subject, style, and motion. We construct ViDiC-1K, the first large-scale benchmark explicitly designed for video contrast understanding, comprising 1,000 carefully curated video pairs with expert-annotated comparative descriptions. We further propose a dual-checklist evaluation framework that decouples similarity assessment from difference identification, enabling granular, interpretable evaluation. Using an LLM-as-a-Judge protocol calibrated against human-annotated contrast checklists, we quantitatively assess 19 state-of-the-art multimodal models. Experiments reveal substantial deficits in dynamic scene comparison, underscoring the difficulty of ViDiC-1K and its value as a challenging benchmark for advancing video contrast understanding.
📝 Abstract
Understanding visual differences between dynamic scenes requires the comparative perception of compositional, spatial, and temporal changes, a capability that remains underexplored in existing vision-language systems. While prior work on Image Difference Captioning (IDC) has enabled models to describe semantic changes between static images, these approaches fail to capture motion continuity, event evolution, or editing consistency over time. We introduce the ViDiC (Video Difference Captioning) task and its corresponding ViDiC-1K dataset, designed to evaluate the ability of Multimodal Large Language Models (MLLMs) to produce fine-grained descriptions of the similarities and differences between video pairs. ViDiC-1K comprises 1,000 curated video pairs annotated with over 4,000 comparative checklist items covering seven categories: subject, style, background, cinematography, motion, location, and playback techniques. To ensure reliable evaluation, we propose a dual-checklist framework that measures the accuracy of similarity and difference descriptions separately, following the LLM-as-a-Judge protocol. Experiments on nineteen representative multimodal models reveal a significant gap in their comparative description and difference perception abilities. We hope ViDiC-1K will serve as a challenging benchmark and lay a solid foundation for advancing video understanding, edit awareness, and comparative reasoning in multimodal intelligence.
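To make the dual-checklist evaluation concrete, here is a minimal Python sketch of how a model's caption might be scored against separate similarity and difference checklists. It assumes only what the abstract states (each video pair carries checklist items of both kinds, and an LLM judge verifies each item against the comparative caption); the `ChecklistItem` structure, the grading prompt, and the `score_caption` function are illustrative stand-ins, not the paper's released evaluation code.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ChecklistItem:
    text: str  # e.g. "Clip B is played in reverse."
    kind: str  # "similarity" or "difference"

def score_caption(
    caption: str,
    checklist: list[ChecklistItem],
    judge: Callable[[str], bool],
) -> dict[str, float | None]:
    """Score similarity and difference checklist items separately.

    `judge` stands in for an LLM-as-a-Judge call: it takes a grading
    prompt and returns True if the judge decides the caption correctly
    covers the item in question.
    """
    scores: dict[str, float | None] = {}
    for kind in ("similarity", "difference"):
        items = [it for it in checklist if it.kind == kind]
        hits = 0
        for it in items:
            prompt = (
                "Grade a comparative video caption against one checklist item.\n"
                f"Caption: {caption}\n"
                f"Item ({it.kind}): {it.text}\n"
                "Does the caption correctly express this item? Answer YES or NO."
            )
            hits += judge(prompt)  # bool counts as 0/1
        # Per-category accuracy; None if this pair has no items of the kind.
        scores[f"{kind}_accuracy"] = hits / len(items) if items else None
    return scores

if __name__ == "__main__":
    checklist = [
        ChecklistItem("Both clips follow the same brown dog.", "similarity"),
        ChecklistItem("Clip B is played in reverse.", "difference"),
    ]
    caption = "Both clips show a brown dog, but clip B runs backwards."
    # Trivial always-yes judge so the sketch runs without an API key;
    # a real setup would send `prompt` to a judge model instead.
    print(score_caption(caption, checklist, judge=lambda prompt: True))
    # -> {'similarity_accuracy': 1.0, 'difference_accuracy': 1.0}
```

Reporting the two accuracies separately, as sketched above, keeps a model from inflating its score by describing only differences (or only similarities) for each video pair.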