🤖 AI Summary
This work addresses three challenges in generating first-person embodied interaction videos from free-space hand gestures: distribution shift between free-space gestures and contact-heavy training data, entanglement between hand and camera motion, and long-term temporal modeling. The authors propose Hand2World, an autoregressive generative framework that combines occlusion-invariant hand conditioning based on projected 3D hand meshes, per-pixel Plücker-ray embeddings that disentangle camera dynamics from hand dynamics, a fully automated monocular annotation pipeline, and distillation of a bidirectional diffusion model into a causal generator. This design enables geometrically consistent and temporally stable generation of first-person videos of arbitrary length. Experiments on three benchmarks demonstrate significant improvements over existing methods in perceptual quality and 3D consistency, while also supporting explicit camera control and long-horizon interactive video synthesis.
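As a concrete illustration of the occlusion-invariant conditioning described above, the sketch below rasterizes projected 3D hand-mesh vertices into a 2D map with no visibility test, so occlusion is left for the generator to infer from scene context. This is a minimal NumPy sketch under assumed conventions (pinhole intrinsics `K`, world-to-camera extrinsics `R`, `t`, an inverse-depth encoding); the function name and details are illustrative, not the paper's implementation.

```python
import numpy as np

def project_hand_mesh(vertices_world, K, R, t, H, W):
    """Splat projected 3D hand-mesh vertices into a 2D conditioning map.

    Every vertex is splatted regardless of occlusion, keeping the control
    signal "occlusion-invariant": the generator, not the conditioning,
    decides what is visible. (Illustrative sketch, not the paper's code.)
    """
    # World -> camera coordinates (pinhole model, world-to-camera R, t).
    cam = vertices_world @ R.T + t          # (N, 3)
    z = cam[:, 2:3].clip(min=1e-6)          # guard against division by zero

    # Perspective projection to homogeneous pixel coordinates.
    pix = (cam / z) @ K.T                   # (N, 3)
    u = np.round(pix[:, 0]).astype(int)
    v = np.round(pix[:, 1]).astype(int)

    # Keep vertices that land inside the image and in front of the camera.
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (cam[:, 2] > 0)

    # Inverse-depth encoding so nearer hand parts respond more strongly
    # (overlapping vertices at a pixel are resolved arbitrarily here).
    cond = np.zeros((H, W), dtype=np.float32)
    cond[v[inside], u[inside]] = 1.0 / z[inside, 0]
    return cond
```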
📝 Abstract
Egocentric interactive world models are essential for augmented reality and embodied AI, where visual generation must respond to user input with low latency, geometric consistency, and long-term stability. We study egocentric interaction generation from a single scene image under free-space hand gestures, aiming to synthesize photorealistic videos in which hands enter the scene, interact with objects, and induce plausible world dynamics under head motion. This setting introduces fundamental challenges, including distribution shift between free-space gestures and contact-heavy training data, ambiguity between hand motion and camera motion in monocular views, and the need for arbitrary-length video generation. We present Hand2World, a unified autoregressive framework that addresses these challenges through occlusion-invariant hand conditioning based on projected 3D hand meshes, allowing visibility and occlusion to be inferred from scene context rather than encoded in the control signal. To stabilize egocentric viewpoint changes, we inject explicit camera geometry via per-pixel Plücker-ray embeddings, disentangling camera motion from hand motion and preventing background drift. We further develop a fully automated monocular annotation pipeline and distill a bidirectional diffusion model into a causal generator, enabling arbitrary-length synthesis. Experiments on three egocentric interaction benchmarks show substantial improvements in perceptual quality and 3D consistency while supporting camera control and long-horizon interactive generation.
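For concreteness, the following is a minimal NumPy sketch of a per-pixel Plücker-ray embedding as commonly used for camera conditioning, assuming a pinhole camera with intrinsics `K` and world-to-camera extrinsics `R`, `t`; the function name and conventions are ours, not taken from the paper. Each pixel maps to six channels: the unit ray direction `d` in world coordinates and the moment `m = o × d`, which together identify the ray, and hence the camera pose, unambiguously.

```python
import numpy as np

def plucker_ray_embedding(K, R, t, H, W):
    """Per-pixel Pluecker ray embedding (6 channels: direction, moment).

    K: (3, 3) camera intrinsics.
    R, t: world-to-camera rotation (3, 3) and translation (3,).
    Returns an (H, W, 6) array holding, per pixel, the normalized world-frame
    ray direction d and the Pluecker moment m = o x d.
    """
    # Camera center in world coordinates.
    o = -R.T @ t  # (3,)

    # Pixel grid (u, v, 1), offset to pixel centers.
    u, v = np.meshgrid(np.arange(W) + 0.5, np.arange(H) + 0.5)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)  # (H, W, 3)

    # Back-project to camera-frame rays, rotate into the world frame:
    # d = R^T K^{-1} pix, written here in row-vector form.
    d = pix @ np.linalg.inv(K).T @ R  # (H, W, 3)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)

    # The moment encodes where the ray sits in space, not just its heading.
    m = np.cross(np.broadcast_to(o, d.shape), d)
    return np.concatenate([d, m], axis=-1)  # (H, W, 6)
```

Concatenating such a 6-channel map with each frame's features gives the generator explicit, pose-dependent geometry, which is what allows camera-induced background motion to be separated from hand motion.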