🤖 AI Summary
Existing hand-object interaction video generation methods struggle to achieve scalability and interaction fidelity simultaneously, due to inherent limitations of 2D or 3D representations. To address this, we propose a novel structure- and contact-aware representation that requires no 3D annotations, explicitly modeling contact states, occlusion relationships, and geometric structural constraints. Our approach adopts a share-and-specialization joint-generation paradigm, integrating spatiotemporal consistency modeling under 2D supervision to precisely capture complex physical interactions and generalize to open-world scenes. Evaluated on two real-world datasets, our method significantly outperforms state-of-the-art approaches, generating physically plausible, temporally coherent, high-fidelity interaction videos. It achieves substantial improvements across three key dimensions: interaction fidelity (e.g., accurate contact localization and force alignment), dynamic coherence (e.g., smooth motion transitions and consistent object dynamics), and cross-scene generalization (e.g., robust performance under unseen object geometries and hand poses).
📝 Abstract
Generating realistic hand-object interaction (HOI) videos is a significant challenge due to the difficulty of modeling physical constraints (e.g., contact and occlusion between hands and manipulated objects). Current methods utilize HOI representations as auxiliary generative objectives to guide video synthesis. However, existing 2D and 3D representations pose a dilemma: neither can simultaneously guarantee scalability and interaction fidelity. To address this limitation, we propose a structure- and contact-aware representation that captures hand-object contact, hand-object occlusion, and holistic structural context without 3D annotations. This interaction-oriented and scalable supervision signal enables the model to learn fine-grained interaction physics and generalize to open-world scenarios. To fully exploit the proposed representation, we introduce a joint-generation paradigm with a share-and-specialization strategy that generates interaction-oriented representations and videos together. Extensive experiments demonstrate that our method outperforms state-of-the-art methods on two real-world datasets in generating physically realistic and temporally coherent HOI videos. Furthermore, our approach exhibits strong generalization to challenging open-world scenarios, highlighting the benefit of our scalable design. Our project page is https://hgzn258.github.io/SCAR/.