EgoExo-Gen: Ego-centric Video Prediction by Watching Exo-centric Videos

📅 2025-04-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses cross-view video prediction: generating future egocentric video frames given an exocentric video, the first egocentric frame, and a textual instruction. Motivated by applications in augmented reality and embodied intelligence, we propose the first two-stage framework that explicitly models hand-object interaction (HOI) dynamics. In Stage I, a vision foundation model coupled with cross-view spatiotemporal correspondence generates pseudo-HOI masks. In Stage II, these masks serve as structural priors to guide a video diffusion model for HOI-aware generation. To our knowledge, this is the first method to incorporate HOI dynamics into cross-view video prediction, enabling semantically consistent and structurally precise outputs. Our approach significantly outperforms state-of-the-art methods on the Ego-Exo4D and H2O benchmarks, with substantial improvements in hand and interactive object generation quality.

Technology Category

Application Category

📝 Abstract
Generating videos in the first-person perspective has broad application prospects in the field of augmented reality and embodied intelligence. In this work, we explore the cross-view video prediction task, where given an exo-centric video, the first frame of the corresponding ego-centric video, and textual instructions, the goal is to generate futur frames of the ego-centric video. Inspired by the notion that hand-object interactions (HOI) in ego-centric videos represent the primary intentions and actions of the current actor, we present EgoExo-Gen that explicitly models the hand-object dynamics for cross-view video prediction. EgoExo-Gen consists of two stages. First, we design a cross-view HOI mask prediction model that anticipates the HOI masks in future ego-frames by modeling the spatio-temporal ego-exo correspondence. Next, we employ a video diffusion model to predict future ego-frames using the first ego-frame and textual instructions, while incorporating the HOI masks as structural guidance to enhance prediction quality. To facilitate training, we develop an automated pipeline to generate pseudo HOI masks for both ego- and exo-videos by exploiting vision foundation models. Extensive experiments demonstrate that our proposed EgoExo-Gen achieves better prediction performance compared to previous video prediction models on the Ego-Exo4D and H2O benchmark datasets, with the HOI masks significantly improving the generation of hands and interactive objects in the ego-centric videos.
Problem

Research questions and friction points this paper is trying to address.

Predict future ego-centric video frames from exo-centric videos
Model hand-object interactions for cross-view video generation
Enhance prediction quality using HOI masks as structural guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-view HOI mask prediction model
Video diffusion model with HOI guidance
Automated pseudo HOI mask generation
🔎 Similar Papers
No similar papers found.