🤖 AI Summary
Existing video understanding models operate primarily at the linguistic level and lack systematic modeling of spatial supersensing, which spans semantic perception, streaming event cognition, implicit 3D spatial cognition, and predictive world modeling; prevailing benchmarks likewise fail to assess this higher-order spatial intelligence. Method: We frame spatial supersensing as a paradigm beyond linguistic-only understanding and present VSI-SUPER, a two-part benchmark of long-horizon visual spatial recall (VSR) and continual visual spatial counting (VSC) whose tasks accept arbitrarily long video inputs yet resist brute-force context expansion. We further propose a predictive sensing framework in which self-supervised next-latent-frame prediction error ("surprise") drives memory updating and event segmentation. Contribution/Results: Trained on the curated VSI-590K dataset, our Cambrian-S model achieves a +30% absolute gain on VSI-Bench without sacrificing general capabilities, while the predictive sensing proof-of-concept substantially outperforms leading proprietary baselines on VSI-SUPER, indicating that scale alone is insufficient and that prediction-driven mechanisms are central to spatial supersensing.
📝 Abstract
We argue that progress in true multimodal intelligence calls for a shift from reactive, task-driven systems and brute-force long context towards a broader paradigm of supersensing. We frame spatial supersensing as four stages beyond linguistic-only understanding: semantic perception (naming what is seen), streaming event cognition (maintaining memory across continuous experiences), implicit 3D spatial cognition (inferring the world behind pixels), and predictive world modeling (creating internal models that filter and organize information). Current benchmarks largely test only the early stages, offering narrow coverage of spatial cognition and rarely challenging models in ways that require true world modeling. To drive progress in spatial supersensing, we present VSI-SUPER, a two-part benchmark: VSR (long-horizon visual spatial recall) and VSC (continual visual spatial counting). These tasks require arbitrarily long video inputs yet are resistant to brute-force context expansion. We then test data scaling limits by curating VSI-590K and training Cambrian-S, achieving +30% absolute improvement on VSI-Bench without sacrificing general capabilities. Yet performance on VSI-SUPER remains limited, indicating that scale alone is insufficient for spatial supersensing. We propose predictive sensing as a path forward, presenting a proof-of-concept in which a self-supervised next-latent-frame predictor leverages surprise (prediction error) to drive memory and event segmentation. On VSI-SUPER, this approach substantially outperforms leading proprietary baselines, showing that spatial supersensing requires models that not only see but also anticipate, select, and organize experience.
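The proof-of-concept described above, a self-supervised next-latent-frame predictor whose prediction error ("surprise") gates memory and event segmentation, can be illustrated with a minimal sketch. Everything below is assumed for illustration rather than taken from Cambrian-S: a naive persistence predictor stands in for the paper's learned latent predictor, and the z-scored surprise test, threshold value, and function names are hypothetical.

```python
import numpy as np

def prediction_error(predicted: np.ndarray, observed: np.ndarray) -> float:
    """L2 distance between a predicted and an observed latent frame."""
    return float(np.linalg.norm(predicted - observed))

def stream_with_surprise(latents, z_threshold=3.0, memory_capacity=128):
    """Consume a stream of latent frames and use prediction error ("surprise")
    to decide which frames to consolidate into memory and where to place
    event boundaries.

    A naive persistence model (predict that the next latent equals the
    current one) stands in for a learned next-latent-frame predictor.
    """
    memory, boundaries, errors = [], [], []
    prev = None
    for t, z in enumerate(latents):
        if prev is not None:
            err = prediction_error(prev, z)  # persistence-predictor error
            errors.append(err)
            # Normalize the current error against running error statistics,
            # so "surprise" adapts to the typical motion in the stream.
            mu, sigma = np.mean(errors), np.std(errors) + 1e-8
            if (err - mu) / sigma > z_threshold:
                boundaries.append(t)   # high surprise: cut an event boundary
                memory.append(z)       # and consolidate the surprising frame
                if len(memory) > memory_capacity:
                    memory.pop(0)      # evict the oldest consolidated frame
        prev = z
    return memory, boundaries

if __name__ == "__main__":
    # Two synthetic "scenes" with an abrupt change in latent statistics at
    # frame 100; the boundary detector should fire near the transition.
    rng = np.random.default_rng(0)
    scene_a = rng.normal(0.0, 0.1, size=(100, 16))
    scene_b = rng.normal(3.0, 0.1, size=(100, 16))
    _, boundaries = stream_with_surprise(np.vstack([scene_a, scene_b]))
    print(boundaries)  # expected: a boundary at (or very near) frame 100
```

The design point the sketch captures is why such a system resists brute-force context expansion: low-surprise frames are never consolidated, so memory grows with the number of events rather than the number of frames, and event boundaries fall out of the same signal for free.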