🤖 AI Summary
Egocentric vision presents challenges including high heterogeneity across multimodal signals (RGB, depth, camera pose, gaze), frequently missing modalities, and the difficulty of modeling dynamic camera motion. Method: This paper proposes the first unified multimodal framework for 4D egocentric perception and synthesis. It introduces temporally aware multimodal tokenizers and a masked self-supervised pretraining scheme that enable shared cross-task representations and efficient use of incomplete multimodal data, without pseudo-labels. A joint perception-generation architecture with lightweight cross-modal attention further improves generalization and efficiency. Contribution/Results: The framework matches or surpasses task-specific models on four benchmarks (gaze prediction, camera trajectory estimation, monocular depth estimation, and conditional video synthesis) while accelerating inference by 10×.
📝 Abstract
Understanding multimodal signals in egocentric vision, such as RGB video, depth, camera poses, and gaze, is essential for applications in augmented reality, robotics, and human-computer interaction. These capabilities enable systems to better interpret the camera wearer's actions, intentions, and surrounding environment. However, building large-scale egocentric multimodal and multitask models presents unique challenges. Egocentric data are inherently heterogeneous, with large variations in modality coverage across devices and settings. Generating pseudo-labels for missing modalities, such as gaze or head-mounted camera trajectories, is often infeasible, making standard supervised learning approaches difficult to scale. Furthermore, dynamic camera motion and the complex temporal and spatial structure of first-person video pose additional challenges for the direct application of existing multimodal foundation models. To address these challenges, we introduce a set of efficient temporal tokenizers and propose EgoM2P, a masked modeling framework that learns from temporally aware multimodal tokens to train a large, general-purpose model for egocentric 4D understanding. This unified design supports multitasking across diverse egocentric perception and synthesis tasks, including gaze prediction, egocentric camera tracking, and monocular depth estimation from egocentric video. EgoM2P also serves as a generative model for conditional egocentric video synthesis. Across these tasks, EgoM2P matches or outperforms specialist models while being an order of magnitude faster. We will fully open-source EgoM2P to support the community and advance egocentric vision research. Project page: https://egom2p.github.io/
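To make the masked-modeling idea concrete, here is a minimal sketch of how temporally indexed multimodal tokens might be corrupted for a reconstruction objective, and how a clip with a missing modality (e.g. no gaze) is handled without pseudo-labels. The token names, mask ratio, and helper functions are illustrative assumptions for exposition, not the paper's actual implementation.

```python
import random

MASK = "<MASK>"

def build_token_stream(modalities, num_frames):
    """Flatten per-frame tokens from each available modality into one
    sequence of (modality, frame, token) triples. Modalities absent
    from a clip are simply skipped -- no pseudo-labels are generated."""
    stream = []
    for name, frames in modalities.items():
        for t in range(num_frames):
            stream.append((name, t, frames[t]))
    return stream

def mask_tokens(stream, mask_ratio=0.5, seed=0):
    """Randomly replace a fraction of tokens with MASK; a masked-modeling
    objective trains the model to reconstruct them. Returns the corrupted
    stream and the indices of masked positions (the prediction targets)."""
    rng = random.Random(seed)
    n_mask = int(len(stream) * mask_ratio)
    masked_idx = set(rng.sample(range(len(stream)), n_mask))
    corrupted = [
        (m, t, MASK) if i in masked_idx else (m, t, tok)
        for i, (m, t, tok) in enumerate(stream)
    ]
    return corrupted, sorted(masked_idx)

# A hypothetical clip with RGB and depth tokens but no gaze stream:
# the missing modality contributes no tokens and no fake labels.
clip = {
    "rgb":   ["r0", "r1", "r2", "r3"],
    "depth": ["d0", "d1", "d2", "d3"],
}
stream = build_token_stream(clip, num_frames=4)
corrupted, targets = mask_tokens(stream, mask_ratio=0.5)
```

Because every token carries its (modality, frame) index, the same corrupted-sequence format can express each downstream task as a masking pattern: masking all gaze tokens yields gaze prediction, masking all depth tokens yields depth estimation, and so on.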