VCBench: A Streaming Counting Benchmark for Spatial-Temporal State Maintenance in Long Videos

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video understanding models struggle to maintain and update spatiotemporal world states over long videos, and current benchmarks lack fine-grained protocols for evaluating this ability. To address this, the work proposes VCBench, a benchmark that uses streaming counting tasks as probes to systematically assess how models track world state across eight fine-grained subcategories spanning object and event visibility, accumulation, and periodicity. Built on 406 long videos, VCBench provides frame-level annotations of 10,071 state-change moments and 1,000 streaming question-answer pairs with 4,576 timestamped query points. Evaluation with three complementary metrics (numerical accuracy, trajectory consistency, and temporal awareness) reveals significant deficiencies in current video-language models, particularly on periodic event counting, demonstrating VCBench's diagnostic utility for world-state reasoning.

📝 Abstract
Video understanding requires models to continuously track and update world state during playback. While existing benchmarks have advanced video understanding evaluation across multiple dimensions, the observation of how models maintain world state remains insufficient. We propose VCBench, a streaming counting benchmark that repositions counting as a minimal probe for diagnosing world state maintenance capability. We decompose this capability into object counting (tracking currently visible objects vs. tracking cumulative unique identities) and event counting (detecting instantaneous actions vs. tracking complete activity cycles), forming 8 fine-grained subcategories. VCBench contains 406 videos with frame-by-frame annotations of 10,071 event occurrence moments and object state change moments, generating 1,000 streaming QA pairs with 4,576 query points along timelines. By observing state maintenance trajectories through streaming multi-point queries, we design three complementary metrics to diagnose numerical precision, trajectory consistency, and temporal awareness. Evaluation on mainstream video-language models shows that current models still exhibit significant deficiencies in spatial-temporal state maintenance, particularly struggling with tasks like periodic event counting. VCBench provides a diagnostic framework for measuring and improving state maintenance in video understanding systems.
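The abstract's three complementary metrics can be illustrated with a minimal sketch. The function definitions below are hypothetical proxies, not the paper's exact formulas: we assume the model is queried at several timestamps along a video and returns a running count each time, so predictions and ground truth are aligned lists of counts at the query points.

```python
# Illustrative metric proxies for streaming count trajectories.
# These definitions are assumptions for exposition, NOT VCBench's formulas.

def numerical_accuracy(pred, gt):
    """Fraction of query points where the predicted count is exactly right."""
    return sum(p == g for p, g in zip(pred, gt)) / len(gt)

def trajectory_consistency(pred, gt):
    """Fraction of consecutive query intervals where the predicted count
    moves in the same direction (up, down, or flat) as the ground truth."""
    sign = lambda x: (x > 0) - (x < 0)
    hits = sum(
        sign(pred[i + 1] - pred[i]) == sign(gt[i + 1] - gt[i])
        for i in range(len(gt) - 1)
    )
    return hits / (len(gt) - 1)

def temporal_awareness(pred, gt):
    """Fraction of intervals where the model changes its count exactly when
    the true count changes: a crude proxy for noticing state changes on time."""
    hits = sum(
        (pred[i + 1] != pred[i]) == (gt[i + 1] != gt[i])
        for i in range(len(gt) - 1)
    )
    return hits / (len(gt) - 1)

# Example: four query points along one video's timeline.
gt = [0, 1, 2, 2]    # true cumulative count at each query point
pred = [0, 1, 1, 2]  # model's streamed answers
print(numerical_accuracy(pred, gt))           # 0.75
print(round(trajectory_consistency(pred, gt), 2))  # 0.33
print(round(temporal_awareness(pred, gt), 2))      # 0.33
```

Note how the example model lands on the correct final count yet scores poorly on consistency and timing: it updates late (missing the second increment, then catching up), which is exactly the kind of failure that point-wise accuracy alone would hide and that multi-point streaming queries expose.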
Problem

Research questions and friction points this paper is trying to address.

video understanding
state maintenance
streaming counting
spatial-temporal reasoning
world state tracking
Innovation

Methods, ideas, or system contributions that make the work stand out.

streaming counting
spatial-temporal state maintenance
video understanding benchmark
world state tracking
fine-grained evaluation
Pengyiang Liu
Institute of Artificial Intelligence, Beihang University, Beijing, China

Zhongyue Shi
Institute of Artificial Intelligence, Beihang University, Beijing, China

Hongye Hao
Institute of Artificial Intelligence, Beihang University, Beijing, China

Qi Fu
Institute of Artificial Intelligence, Beihang University, Beijing, China

Xueting Bi
Institute of Artificial Intelligence, Beihang University, Beijing, China

Siwei Zhang
ETH Zurich
3D human pose estimation, human-scene interactions

Xiaoyang Hu
Institute of Artificial Intelligence, Beihang University, Beijing, China

Zitian Wang
Institute of Artificial Intelligence, Beihang University, Beijing, China

Linjiang Huang
BUAA << CUHK << CASIA
Computer Vision, Pattern Recognition, Machine Learning

Si Liu
Fred Hutchinson Cancer Center
Genomics, Biostatistics, Anomaly Detection, Open Category Detection