🤖 AI Summary
Video understanding demands effective modeling of temporal dynamics and evolving visual contexts, placing heightened requirements on a model's spatiotemporal reasoning capabilities. This work systematically surveys existing approaches through a structured analytical framework organized around three perspectives: low-level video geometry understanding, high-level semantic understanding, and unified video understanding models. In doing so, it traces the field's shift from task-specific pipelines toward a general-purpose paradigm adaptable to diverse downstream tasks. Beyond synthesizing key methodologies spanning geometric representation, semantic interpretation, and unified modeling, the survey maps the evolving landscape of the field, illuminating core trends and open challenges on the path toward robust, scalable video foundation models.
📝 Abstract
Video understanding aims to enable models to perceive, reason about, and interact with the dynamic visual world. In contrast to image understanding, video understanding inherently requires modeling temporal dynamics and evolving visual context, placing stronger demands on spatiotemporal reasoning and making it a foundational problem in computer vision. In this survey, we present a structured overview of video understanding by organizing the literature into three complementary perspectives: low-level video geometry understanding, high-level semantic understanding, and unified video understanding models. We further highlight a broader shift from isolated, task-specific pipelines toward unified modeling paradigms that can be adapted to diverse downstream objectives, enabling a more systematic view of recent progress. By consolidating these perspectives, this survey provides a coherent map of the evolving video understanding landscape, summarizes key modeling trends and design principles, and outlines open challenges toward building robust, scalable, and unified video foundation models.