🤖 AI Summary
Existing video understanding models struggle to maintain and update spatiotemporal world states over long videos, and current benchmarks lack fine-grained protocols for evaluating this ability. To address this, the authors propose VCBench, a benchmark that uses streaming counting tasks as probes to systematically assess how models track world state across eight dimensions, including object and event visibility, accumulation, and periodicity. Built on 406 long videos, VCBench provides frame-level annotations of 10,071 state-change moments and 1,000 question-answer pairs with timestamped multi-point queries. Evaluation with three metrics (numerical accuracy, trajectory consistency, and temporal awareness) reveals significant deficiencies in current video-language models, particularly in handling periodic events, demonstrating VCBench's diagnostic utility for world-state reasoning.
📝 Abstract
Video understanding requires models to continuously track and update world state during playback. While existing benchmarks have advanced video understanding evaluation across multiple dimensions, how models maintain world state remains insufficiently examined. We propose VCBench, a streaming counting benchmark that repositions counting as a minimal probe for diagnosing world-state maintenance capability. We decompose this capability into object counting (tracking currently visible objects vs. tracking cumulative unique identities) and event counting (detecting instantaneous actions vs. tracking complete activity cycles), forming 8 fine-grained subcategories. VCBench contains 406 videos with frame-by-frame annotations of 10,071 event occurrences and object state-change moments, yielding 1,000 streaming QA pairs with 4,576 query points along video timelines. By observing state-maintenance trajectories through streaming multi-point queries, we design three complementary metrics that diagnose numerical precision, trajectory consistency, and temporal awareness. Evaluation of mainstream video-language models shows that current models still exhibit significant deficiencies in spatiotemporal state maintenance, struggling in particular with tasks like periodic event counting. VCBench provides a diagnostic framework for measuring and improving state maintenance in video understanding systems.
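The abstract names three complementary metrics but does not give their formulas. As a rough illustration of what such metrics could look like, here is a hypothetical sketch: a model is queried at several timeline points and returns a count at each, and the three scores compare that predicted trajectory against the ground-truth trajectory. The exact definitions here (exact-match accuracy, direction-of-change agreement, and recall of state-change steps) are assumptions for illustration, not the paper's actual formulas.

```python
def sign(x):
    """Return -1, 0, or +1 for the direction of a count change."""
    return (x > 0) - (x < 0)

def numerical_accuracy(preds, golds):
    """Fraction of query points where the predicted count exactly
    matches the ground-truth count (hypothetical definition)."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def trajectory_consistency(preds, golds):
    """Fraction of consecutive query-point pairs where the predicted
    count moves in the same direction (up / down / flat) as the
    ground truth (hypothetical definition)."""
    steps = list(zip(zip(preds, preds[1:]), zip(golds, golds[1:])))
    hits = [sign(p1 - p0) == sign(g1 - g0) for (p0, p1), (g0, g1) in steps]
    return sum(hits) / len(hits)

def temporal_awareness(preds, golds):
    """Of the steps where the true count changes, the fraction where
    the model's count also changes (hypothetical definition)."""
    steps = list(zip(zip(preds, preds[1:]), zip(golds, golds[1:])))
    flags = [p1 != p0 for (p0, p1), (g0, g1) in steps if g1 != g0]
    return sum(flags) / len(flags) if flags else 1.0

# Example: 5 query points; the model misses one object and merges a step.
preds = [0, 1, 1, 2, 4]
golds = [0, 1, 2, 3, 4]
print(numerical_accuracy(preds, golds))      # exact match at 3 of 5 points
print(trajectory_consistency(preds, golds))  # 3 of 4 steps move the right way
print(temporal_awareness(preds, golds))      # 3 of 4 true changes detected
```

Separating the three scores matters: a model can land on the right final number (high numerical accuracy at the last point) while its intermediate trajectory is inconsistent or lags behind the actual state changes, which is exactly the failure mode streaming multi-point queries are designed to expose.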