🤖 AI Summary
This work identifies a critical deficiency in current video-language models (VLMs): their inability to robustly perform periodic state-transition reasoning—specifically, recognizing cyclic patterns, modeling temporal dependencies, and extracting quantitative spatiotemporal information. To address this gap, the authors introduce CycliST, the first synthetic benchmark explicitly designed to evaluate VLMs' periodic reasoning capabilities. CycliST features multi-object cyclic motion, orbital trajectories, and time-varying attributes (e.g., color, scale), organized in a hierarchical difficulty taxonomy. The key innovations are: (1) systematic decoupling of the periodic, temporal, and quantitative reasoning dimensions; and (2) incorporation of controlled perturbations (e.g., lighting variations, occlusions) to assess robustness. Extensive experiments reveal that state-of-the-art VLMs consistently fail to reliably identify periodic structures, that performance degrades sharply with increasing task complexity, and that no single model dominates across all tasks—highlighting a fundamental unsolved challenge in video-language understanding.
📝 Abstract
We present CycliST, a novel benchmark dataset designed to evaluate Video Language Models (VLMs) on their ability to reason textually about cyclical state transitions. CycliST captures fundamental aspects of real-world processes by generating synthetic, richly structured video sequences featuring periodic patterns in object motion and visual attributes. It employs a tiered evaluation system that progressively increases difficulty through variations in the number of cyclic objects, scene clutter, and lighting conditions, challenging state-of-the-art models' spatio-temporal cognition. We conduct extensive experiments with current state-of-the-art VLMs, both open-source and proprietary, and reveal their limitations in generalizing to cyclical dynamics such as linear and orbital motion, as well as to time-dependent changes in visual attributes like color and scale. Our results demonstrate that present-day VLMs struggle to reliably detect and exploit cyclic patterns, lack a robust notion of temporal understanding, and are unable to extract quantitative insights from scenes, such as the number of objects in motion, highlighting a significant technical gap that needs to be addressed. More specifically, we find that no single model consistently leads in performance: neither size nor architecture correlates strongly with outcomes, and no model succeeds equally well across all tasks. By providing a targeted challenge and a comprehensive evaluation framework, CycliST paves the way for visual reasoning models that surpass the state-of-the-art in understanding periodic patterns.