🤖 AI Summary
This work addresses the problem of predicting the future 6-degree-of-freedom pose and 3D trajectory of rigid objects from egocentric video. To this end, the authors propose an object-centric, explicit 3D dynamics model that enables geometrically consistent and temporally coherent predictions through end-to-end learning. By integrating video segmentation, mesh reconstruction, and pose estimation, they construct a large-scale pseudo-ground-truth dataset comprising over two million video clips. This is the first approach to achieve object-level, geometrically interpretable 3D dynamics modeling from passive visual observations. Experiments demonstrate that the method significantly outperforms existing approaches in prediction accuracy, geometric consistency, and generalization to unseen objects and scenes, establishing a scalable new paradigm for object-centric dynamic modeling.
📝 Abstract
Humans can effortlessly anticipate how objects might move or change through interaction: imagining a cup being lifted, a knife slicing, or a lid being closed. We aim to endow computational systems with a similar ability to predict plausible future object motions directly from passive visual observation. We introduce ObjectForesight, a 3D object-centric dynamics model that predicts future 6-DoF poses and trajectories of rigid objects from short egocentric video sequences. Unlike conventional world or dynamics models that operate in pixel or latent space, ObjectForesight represents the world explicitly in 3D at the object level, enabling geometrically grounded and temporally coherent predictions that capture object affordances and trajectories. To train such a model at scale, we leverage recent advances in segmentation, mesh reconstruction, and 3D pose estimation to curate a dataset of more than 2 million short clips with pseudo-ground-truth 3D object trajectories. Through extensive experiments, we show that ObjectForesight achieves significant gains in accuracy, geometric consistency, and generalization to unseen objects and scenes, establishing a scalable framework for learning physically grounded, object-centric dynamics models directly from observation.

Project page: objectforesight.github.io
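To make the prediction target concrete: a 6-DoF pose combines a 3D rotation with a 3D translation, and a predicted trajectory is a sequence of such poses that can be scored against pseudo-ground truth. The sketch below is a minimal, hypothetical illustration (not the paper's actual code or metrics), assuming a unit-quaternion-plus-translation representation and simple mean translation/rotation errors over a trajectory:

```python
import math
from dataclasses import dataclass

# Hypothetical representation for illustration: a rigid object's 6-DoF pose
# as a unit quaternion (rotation) plus a translation vector.
@dataclass
class Pose6DoF:
    quat: tuple   # (w, x, y, z), unit quaternion
    trans: tuple  # (x, y, z), e.g. in metres

def rotation_geodesic_deg(q1, q2):
    """Angular distance between two unit quaternions, in degrees."""
    d = abs(sum(a * b for a, b in zip(q1, q2)))
    d = min(1.0, d)  # guard against floating-point overshoot
    return math.degrees(2.0 * math.acos(d))

def trajectory_errors(pred, gt):
    """Mean translation and rotation error over a predicted pose trajectory."""
    t_err = [math.dist(p.trans, g.trans) for p, g in zip(pred, gt)]
    r_err = [rotation_geodesic_deg(p.quat, g.quat) for p, g in zip(pred, gt)]
    n = len(pred)
    return sum(t_err) / n, sum(r_err) / n

# Toy example: an object translating along z; the prediction is offset by 10 cm.
ident = (1.0, 0.0, 0.0, 0.0)
gt = [Pose6DoF(ident, (0.0, 0.0, float(t))) for t in range(3)]
pred = [Pose6DoF(ident, (0.0, 0.1, float(t))) for t in range(3)]
t_err, r_err = trajectory_errors(pred, gt)
print(t_err, r_err)  # mean translation error ~0.1 m, rotation error 0 degrees
```

The names `Pose6DoF` and `trajectory_errors` are invented for this sketch; the paper evaluates prediction accuracy and geometric consistency, but its exact metrics and pose parameterization may differ.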