🤖 AI Summary
This work addresses the challenges of hand and object pose estimation in egocentric manipulation videos, where severe occlusions and frequent object entry/exit from the field of view hinder accurate reconstruction. Existing methods often fall short in modeling hand-object interactions, handling out-of-view scenarios, and maintaining spatial consistency between hands and objects. To overcome these limitations, we propose the first generative joint reconstruction framework that leverages pretrained generative priors to jointly optimize hand and object trajectories in world coordinates. By integrating object templates with video observations to guide generation, our approach departs from conventional pipelines that independently estimate and subsequently fuse hand and object poses. This unified framework significantly enhances robustness under occlusion and out-of-view conditions while ensuring spatially consistent hand-object interactions, achieving state-of-the-art performance in hand motion estimation, 6D object pose estimation, and interaction reconstruction.
📝 Abstract
Egocentric manipulation videos are highly challenging due to severe occlusions during interactions and frequent object entries and exits from the camera view as the person moves. Current methods typically focus on recovering either hand or object pose in isolation, but both struggle during interactions and fail to handle out-of-sight cases. Moreover, their independent predictions often lead to inconsistent hand-object relations. We introduce WHOLE, a method that holistically reconstructs hand and object motion in world space from egocentric videos given object templates. Our key insight is to learn a generative prior over hand-object motion to jointly reason about their interactions. At test time, the pretrained prior is guided to generate trajectories that conform to the video observations. This joint generative reconstruction substantially outperforms approaches that estimate hands and objects separately and then fuse them in post-processing. WHOLE achieves state-of-the-art performance on hand motion estimation, 6D object pose estimation, and their relative interaction reconstruction. Project website: https://judyye.github.io/whole-www
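The abstract's central mechanism, steering a pretrained generative prior at test time so that its samples agree with the video observations, can be illustrated with a minimal sketch. The snippet below shows one common way such guidance is implemented (classifier-guidance-style steering of a diffusion sampler); `denoiser`, `obs_loss`, the noise schedule, and the guidance rule are illustrative assumptions for exposition, not WHOLE's actual implementation.

```python
# Minimal sketch: guiding a pretrained motion diffusion prior at test time.
# `denoiser` and `obs_loss` are hypothetical stand-ins, not the paper's API.
import torch

def guided_sample(denoiser, obs_loss, shape, num_steps=1000,
                  guidance_scale=1.0, device="cpu"):
    """Sample a hand-object trajectory from a diffusion prior while nudging
    each denoising step toward agreement with the video observations.

    denoiser(x_t, t) -> predicted noise (the pretrained prior).
    obs_loss(x0_hat) -> scalar data term, e.g. reprojection error of the
                        hypothesized trajectory against the video frames.
    """
    # Linear beta schedule (a common DDPM default; the real prior may differ).
    betas = torch.linspace(1e-4, 2e-2, num_steps, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x_t = torch.randn(shape, device=device)  # start from pure noise
    for t in reversed(range(num_steps)):
        a_bar = alpha_bars[t]
        with torch.enable_grad():
            x_t = x_t.detach().requires_grad_(True)
            eps = denoiser(x_t, torch.tensor([t], device=device))
            # Predicted clean trajectory from the noisy sample (Tweedie estimate).
            x0_hat = (x_t - torch.sqrt(1 - a_bar) * eps) / torch.sqrt(a_bar)
            # Gradient of the observation term with respect to the noisy sample.
            grad = torch.autograd.grad(obs_loss(x0_hat), x_t)[0]

        # Standard DDPM mean, shifted against the data-term gradient.
        mean = (x_t - betas[t] / torch.sqrt(1 - a_bar) * eps) / torch.sqrt(alphas[t])
        mean = mean - guidance_scale * grad
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = (mean + torch.sqrt(betas[t]) * noise).detach()
    return x_t
```

In this sketch, any differentiable fit-to-video measure (2D keypoint reprojection, mask overlap, contact consistency) could be plugged into `obs_loss`; the choice of data terms and guidance schedule here is an assumption, not a description of the paper's loss.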