🤖 AI Summary
This work addresses the challenges of generating physically consistent 6-DoF object trajectories in egocentric videos—namely occlusions, rapid motion, and the absence of explicit physical reasoning—by introducing EgoFlow, the first generative framework to apply gradient-guided flow matching to this task. EgoFlow integrates multimodal egocentric observations through a hybrid Mamba-Transformer-Perceiver architecture that jointly models temporal dynamics, scene geometry, and semantic intent. During inference, it enforces differentiable physical constraints, such as collision avoidance and motion smoothness, enabling the synthesis of physically plausible, controllable, and temporally coherent trajectories without post-processing or additional supervision. Experiments demonstrate that EgoFlow significantly outperforms diffusion- and Transformer-based baselines on HD-EPIC, EgoExo4D, and HOT3D, reducing collision rates by up to 79% and exhibiting strong generalization to unseen scenarios.
📝 Abstract
Understanding and predicting object motion from egocentric video is fundamental to embodied perception and interaction. However, generating physically consistent 6DoF trajectories remains challenging due to occlusions, fast motion, and the lack of explicit physical reasoning in existing generative models. We present EgoFlow, a flow-matching framework that synthesizes realistic and physically plausible trajectories conditioned on multimodal egocentric observations. EgoFlow employs a hybrid Mamba-Transformer-Perceiver architecture to jointly model temporal dynamics, scene geometry, and semantic intent, while a gradient-guided inference process enforces differentiable physical constraints such as collision avoidance and motion smoothness. This combination yields coherent and controllable motion generation without post-hoc filtering or additional supervision. Experiments on real-world datasets HD-EPIC, EgoExo4D, and HOT3D show that EgoFlow outperforms diffusion-based and transformer baselines in accuracy, generalization, and physical realism, reducing collision rates by up to 79%, and strong generalization to unseen scenes. Our results highlight the promise of flow-based generative modeling for scalable and physically grounded egocentric motion understanding.