🤖 AI Summary
Video summarization requires selecting keyframes that are both visually diverse and semantically representative; however, existing methods neglect fine-grained object-level semantics and their strong correlation with the video’s central theme, while language-guided approaches struggle to model complex inter-object semantic relationships inherent in real-world videos. To address this, we propose VideoGraph—a language-guided recursive spatiotemporal graph network. VideoGraph constructs a cross-modal spatiotemporal graph where objects and frames serve as nodes and semantic relations as edges. It incorporates language queries to enrich node representations and employs a recursive graph optimization mechanism to dynamically refine graph structure, enabling object-level semantic awareness for keyframe selection. Evaluated on multiple general-purpose and query-focused video summarization benchmarks, VideoGraph achieves state-of-the-art performance under both supervised and unsupervised settings.
📝 Abstract
Video summarization aims to select keyframes that are visually diverse and represent the whole story of a given video. Previous approaches have focused on global dependencies between frames via temporal modeling. However, fine-grained visual entities, such as objects, are also highly related to the main content of the video. Moreover, language-guided video summarization, which has recently been studied, requires a comprehensive linguistic understanding of complex real-world videos. To consider how all the objects are semantically related to each other, this paper regards video summarization as a language-guided spatiotemporal graph modeling problem. We present recursive spatiotemporal graph networks, called VideoGraph, which formulate the objects and frames as nodes of the spatial and temporal graphs, respectively. The nodes in each graph are connected and aggregated via graph edges, which represent the semantic relationships between the nodes. To prevent the edges from being determined solely by visual similarity, we incorporate language queries derived from the video into the graph node representations, enabling them to carry semantic knowledge. In addition, we adopt a recursive strategy to refine the initial graphs and correctly classify each frame node as a keyframe. In our experiments, VideoGraph achieves state-of-the-art performance on several benchmarks for generic and query-focused video summarization in both supervised and unsupervised settings. The code is available at https://github.com/park-jungin/videograph.
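The core pipeline described above (query-conditioned node features, similarity-based edges, and recursive graph refinement) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the additive query fusion, the softmax edge weighting, and the fixed number of refinement steps are all simplifying assumptions.

```python
import numpy as np

def build_graph(node_feats, query_feat):
    """Build a graph over nodes (objects or frames), conditioned on a language query.

    node_feats: (n, d) array of node embeddings
    query_feat: (d,) language query embedding
    Returns query-fused node features and a row-normalized adjacency matrix.
    """
    # Assumed fusion: simple additive injection of the query into every node,
    # so edges reflect query-relevant semantics rather than raw visual similarity.
    fused = node_feats + query_feat
    n = fused.shape[0]
    norms = np.linalg.norm(fused, axis=1, keepdims=True) + 1e-8
    unit = fused / norms
    sim = unit @ unit.T                      # pairwise cosine similarity
    weights = np.exp(sim)
    np.fill_diagonal(weights, 0.0)           # no self-loops
    adj = weights / weights.sum(axis=1, keepdims=True)  # rows sum to 1
    return fused, adj

def recursive_refine(node_feats, query_feat, steps=3):
    """Recursively rebuild the graph from aggregated features.

    Each step aggregates neighbor features through the current edges,
    then the next step re-derives the edges from the refined features.
    """
    feats = node_feats
    for _ in range(steps):
        fused, adj = build_graph(feats, query_feat)
        feats = adj @ fused                  # one round of neighbor aggregation
    return feats
```

Keyframe selection would then score each refined frame-node feature (e.g. with a small classifier head); that head is omitted here.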