🤖 AI Summary
Existing methods for detecting object state changes in video only localize start and end timestamps, offering no pixel-level picture of how an action progresses spatially. This work introduces spatially-progressing object state change segmentation, a new task that requires distinguishing, at the pixel level, the regions of an object that are still actionable from those that have already been transformed, across video frames. Methodologically: (1) we formally define the task and introduce WhereToChange, the first benchmark for it built from in-the-wild Internet videos; (2) we propose a vision-language model (VLM)-driven pseudo-labeling strategy incorporating spatiotemporal feature alignment and state-change dynamics constraints; (3) we design a weakly supervised segmentation architecture that relies only on frame-level state-change annotations. The approach outperforms strong baselines on two datasets, localizing exactly where and how fast objects are changing, and opens up applications such as robotic manipulation progress tracking and interactive video understanding.
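To make the VLM-driven pseudo-labeling idea concrete, here is a minimal illustrative sketch, not the paper's implementation: it assumes each frame has already been decomposed into region proposals with L2-normalized VLM image embeddings, and that text embeddings exist for the object's initial- and end-state descriptions. A region is pseudo-labeled "transformed" when it matches the end-state description more closely. All names and shapes below are hypothetical.

```python
import numpy as np

def pseudo_label_regions(region_embs: np.ndarray,
                         init_state_emb: np.ndarray,
                         end_state_emb: np.ndarray) -> np.ndarray:
    """Pseudo-label object regions as actionable vs. transformed.

    region_embs:    (T, R, D) L2-normalized VLM embeddings of R region
                    proposals per frame over T frames (hypothetical input).
    init_state_emb: (D,) text embedding of the initial state,
                    e.g. "an unchopped avocado".
    end_state_emb:  (D,) text embedding of the changed state,
                    e.g. "a chopped avocado".
    Returns a (T, R) boolean array: True where a region resembles the
    end state more than the initial state.
    """
    sim_init = region_embs @ init_state_emb   # (T, R) cosine similarities
    sim_end = region_embs @ end_state_emb
    return sim_end > sim_init
```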
📝 Abstract
Object state changes in video reveal critical information about human and agent activity. However, existing methods are limited to temporal localization of when the object is in its initial state (e.g., the unchopped avocado) versus when it has completed a state change (e.g., the chopped avocado), which limits applicability for any task requiring detailed information about an action's progress and its spatial localization. We propose to deepen the problem by introducing the spatially-progressing object state change segmentation task. The goal is to segment, at the pixel level, those regions of an object that are actionable and those that are transformed. We introduce the first model to address this task, designing a VLM-based pseudo-labeling approach and state-change dynamics constraints, along with a novel WhereToChange benchmark built on in-the-wild Internet videos. Experiments on two datasets validate both the challenge of the new task and the promise of our model for localizing exactly where and how fast objects are changing in video. We further demonstrate useful implications for tracking activity progress to benefit robotic agents. Project page: https://vision.cs.utexas.edu/projects/spoc-spatially-progressing-osc
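One way to picture the state-change dynamics constraint, and the progress-tracking use case for robotic agents, is the hedged sketch below: assuming the transformation is irreversible (a chopped region stays chopped), the per-frame transformed fraction should never decrease, which yields a clean progress signal an agent could monitor. This is a hypothetical illustration, not the authors' released code.

```python
import numpy as np

def progress_curve(transformed: np.ndarray) -> np.ndarray:
    """Turn per-region pseudo-labels into a monotone progress signal.

    transformed: (T, R) booleans from pseudo-labeling (True = transformed).
    Irreversibility assumption: state changes like chopping do not undo,
    so the raw transformed fraction is smoothed with a running maximum.
    Returns a (T,) non-decreasing curve in [0, 1]; 1.0 means the whole
    object has completed its state change.
    """
    raw = transformed.mean(axis=1)            # raw fraction per frame
    return np.maximum.accumulate(raw)         # enforce monotone progress
```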