Self-Supervised Learning of Motion Concepts by Optimizing Counterfactuals

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Video motion estimation is a fundamental computer vision task, yet existing approaches rely heavily on synthetic data or scene-specific heuristics, limiting generalization. This paper introduces the first unsupervised motion disentanglement framework grounded in counterfactual optimization: it requires no labeled data or hand-crafted priors, instead leveraging gradient-based counterfactual probing to directly extract optical flow and occlusion maps from a pre-trained world model. By unifying counterfactual reasoning with self-supervised learning, our method achieves interpretable, disentangled, and jointly optimized motion representations. Evaluated on real-world video benchmarks, it establishes new state-of-the-art performance in motion estimation while significantly enhancing model generalizability and cross-scene transferability. The approach provides a reliable, physics-aware motion perception foundation for downstream applications including controllable video generation and embodied intelligence.

📝 Abstract
Estimating motion in videos is an essential computer vision problem with many downstream applications, including controllable video generation and robotics. Current solutions are primarily trained using synthetic data or require tuning of situation-specific heuristics, which inherently limits these models' capabilities in real-world contexts. Despite recent developments in large-scale self-supervised learning from videos, leveraging such representations for motion estimation remains relatively underexplored. In this work, we develop Opt-CWM, a self-supervised technique for flow and occlusion estimation from a pre-trained next-frame prediction model. Opt-CWM works by learning to optimize counterfactual probes that extract motion information from a base video model, avoiding the need for fixed heuristics while training on unrestricted video inputs. We achieve state-of-the-art performance for motion estimation on real-world videos while requiring no labeled data.
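The abstract's core idea, extracting flow by probing a frozen next-frame predictor with counterfactual perturbations, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `make_shift_predictor` model is a hypothetical stand-in that simply translates the frame, the probe is a fixed Gaussian bump rather than the learned, jointly optimized probe of Opt-CWM, and correspondence is read off with an argmax instead of gradient-based optimization.

```python
import numpy as np

def make_shift_predictor(dx, dy):
    """Hypothetical stand-in for a pre-trained next-frame predictor:
    it just translates the frame by a fixed (dx, dy). The real method
    uses a learned video world model."""
    def predict(frame):
        return np.roll(frame, shift=(dy, dx), axis=(0, 1))
    return predict

def counterfactual_flow(predict, frame, query, amp=1.0, sigma=1.5):
    """Estimate the flow at `query` by injecting a small Gaussian
    perturbation into the current frame and locating where it
    reappears in the predicted next frame."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w]
    qy, qx = query
    bump = amp * np.exp(-((yy - qy) ** 2 + (xx - qx) ** 2) / (2 * sigma ** 2))
    # Counterfactual probe: predict with and without the perturbation,
    # then isolate the perturbation's image under the predictor.
    delta = predict(frame + bump) - predict(frame)
    ty, tx = np.unravel_index(np.argmax(np.abs(delta)), delta.shape)
    return (int(tx - qx), int(ty - qy))  # flow as (dx, dy)

predict = make_shift_predictor(dx=3, dy=-2)
frame = np.random.default_rng(0).random((32, 32))
flow = counterfactual_flow(predict, frame, query=(16, 16))
print(flow)  # (3, -2): the probe recovers the toy model's translation
```

The principle the sketch captures is that a good world model propagates the injected perturbation along the scene's motion, so the perturbation's displacement between frames reveals the flow; Opt-CWM learns the probe parameters end-to-end rather than fixing them by hand.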
Problem

Research questions and friction points this paper is trying to address.

Estimating motion in videos for computer vision applications
Overcoming limitations of synthetic data and heuristics in motion estimation
Leveraging self-supervised learning for real-world motion and occlusion estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised learning from video frames
Optimizing counterfactual probes for motion
No labeled data for real-world performance
Stefan Stojanov
Postdoc at Stanford Vision Lab and Neuro AI Lab
Computer Vision, Machine Learning
David Wendt
Stanford University
Seungwoo Kim
Stanford University
R. Venkatesh
Stanford University
Kevin T. Feigelis
Stanford University
Jiajun Wu
Stanford University
Daniel L. K. Yamins
Stanford University