AGILE: Hand-Object Interaction Reconstruction from Video via Agentic Generation

πŸ“… 2026-02-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing methods for reconstructing hand–object interactions from monocular video often fail to produce complete, simulation-ready geometry under severe occlusion or in-the-wild conditions because they rely on neural rendering and fragile structure-from-motion (SfM) initialization. This work proposes AGILE, an agent-based generative reconstruction framework that dispenses with SfM entirely. AGILE leverages a vision-language model (VLM) to guide the generation of complete, textured object meshes, propagates the object pose temporally from a single anchor frame, and enforces a joint objective combining semantic, geometric, and interaction-stability constraints through contact-aware optimization. Evaluated on HO3D, DexYCB, and in-the-wild videos, AGILE significantly outperforms prior approaches, yielding high-fidelity, physically plausible dynamic interaction assets directly usable in robotic simulation.
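To make the anchor-and-track idea concrete, here is a minimal sketch of greedy frame-to-frame pose propagation: the pose is fixed once at the interaction-onset frame and then carried forward by keeping whichever small perturbation best matches the next frame. Everything here (Pose, score_fn, propagate_poses, the random-search parameters) is an illustrative assumption, not AGILE's actual foundation-model-based tracker.

```python
# Sketch: propagate a 6-DoF object pose from an anchor frame by maximizing
# a visual-similarity score between the generated asset and each frame.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Pose:
    R: np.ndarray  # 3x3 rotation matrix
    t: np.ndarray  # 3-vector translation

def small_rotation(axis_angle: np.ndarray) -> np.ndarray:
    """Rodrigues' formula for a small axis-angle perturbation."""
    theta = np.linalg.norm(axis_angle)
    if theta < 1e-8:
        return np.eye(3)
    k = axis_angle / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def propagate_poses(anchor: Pose,
                    frames: List[np.ndarray],
                    score_fn: Callable[[Pose, np.ndarray], float],
                    n_samples: int = 64,
                    sigma_r: float = 0.05,
                    sigma_t: float = 0.01,
                    seed: int = 0) -> List[Pose]:
    """Greedy frame-to-frame tracking: initialize at the anchor frame,
    then keep whichever perturbed pose best matches each new frame."""
    rng = np.random.default_rng(seed)
    poses, prev = [anchor], anchor
    for frame in frames[1:]:
        best, best_score = prev, score_fn(prev, frame)
        for _ in range(n_samples):
            cand = Pose(small_rotation(rng.normal(0.0, sigma_r, 3)) @ prev.R,
                        prev.t + rng.normal(0.0, sigma_t, 3))
            s = score_fn(cand, frame)
            if s > best_score:
                best, best_score = cand, s
        poses.append(best)
        prev = best
    return poses
```

The key property this mirrors from the summary is that only the onset frame needs an absolute initialization; every later frame is anchored to its predecessor, so no SfM-style global registration is required.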

πŸ“ Abstract
Reconstructing dynamic hand-object interactions from monocular videos is critical for dexterous manipulation data collection and creating realistic digital twins for robotics and VR. However, current methods face two prohibitive barriers: (1) reliance on neural rendering often yields fragmented, non-simulation-ready geometries under heavy occlusion, and (2) dependence on brittle Structure-from-Motion (SfM) initialization leads to frequent failures on in-the-wild footage. To overcome these limitations, we introduce AGILE, a robust framework that shifts the paradigm from reconstruction to agentic generation for interaction learning. First, we employ an agentic pipeline where a Vision-Language Model (VLM) guides a generative model to synthesize a complete, watertight object mesh with high-fidelity texture, independent of video occlusions. Second, bypassing fragile SfM entirely, we propose a robust anchor-and-track strategy. We initialize the object pose at a single interaction onset frame using a foundation model and propagate it temporally by leveraging the strong visual similarity between our generated asset and video observations. Finally, a contact-aware optimization integrates semantic, geometric, and interaction stability constraints to enforce physical plausibility. Extensive experiments on HO3D, DexYCB, and in-the-wild videos reveal that AGILE outperforms baselines in global geometric accuracy while demonstrating exceptional robustness on challenging sequences where prior art frequently collapses. By prioritizing physical validity, our method produces simulation-ready assets validated via real-to-sim retargeting for robotic applications.
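Below is a minimal sketch of a joint contact-aware objective combining the three constraint families the abstract names. The term definitions, weights, and the object_sdf / mask interfaces are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch: weighted sum of semantic (mask agreement), geometric (contact
# points on the surface), and stability (no penetration, no hovering) terms.
from typing import Callable, Tuple
import numpy as np

def semantic_term(rendered_mask: np.ndarray, observed_mask: np.ndarray) -> float:
    """1 - IoU between the rendered object silhouette and the observed 2D mask."""
    inter = np.logical_and(rendered_mask, observed_mask).sum()
    union = np.logical_or(rendered_mask, observed_mask).sum()
    return float(1.0 - inter / max(int(union), 1))

def geometric_term(contact_pts: np.ndarray,
                   object_sdf: Callable[[np.ndarray], np.ndarray]) -> float:
    """Mean |signed distance| of intended hand contact points to the object
    surface; zero when they lie exactly on it."""
    return float(np.abs(object_sdf(contact_pts)).mean())

def stability_term(hand_pts: np.ndarray,
                   object_sdf: Callable[[np.ndarray], np.ndarray],
                   margin: float = 0.002) -> float:
    """Penalize penetration (negative SDF) more heavily than separation,
    so the grasp neither sinks into nor floats off the object."""
    d = object_sdf(hand_pts)
    penetration = np.clip(-d, 0.0, None)         # points inside the mesh
    separation = np.clip(d - margin, 0.0, None)  # points hovering off it
    return float((3.0 * penetration + separation).mean())

def joint_objective(rendered_mask: np.ndarray, observed_mask: np.ndarray,
                    contact_pts: np.ndarray, hand_pts: np.ndarray,
                    object_sdf: Callable[[np.ndarray], np.ndarray],
                    weights: Tuple[float, float, float] = (1.0, 1.0, 0.5)) -> float:
    """Weighted combination of the semantic, geometric, and stability terms."""
    return (weights[0] * semantic_term(rendered_mask, observed_mask)
            + weights[1] * geometric_term(contact_pts, object_sdf)
            + weights[2] * stability_term(hand_pts, object_sdf))
```

Minimizing such an objective over hand and object poses is one way to trade off image evidence against physical plausibility, which is what makes the resulting assets usable in simulation rather than merely visually consistent.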
Problem

Research questions and friction points this paper is trying to address.

hand-object interaction
monocular video reconstruction
occlusion
Structure-from-Motion
simulation-ready geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

agentic generation
hand-object interaction
simulation-ready reconstruction
occlusion-robust tracking
contact-aware optimization
πŸ‘₯ Authors
Jin-Chuan Shi
State Key Lab of CAD & CG, Zhejiang University
Binhong Ye
State Key Lab of CAD & CG, Zhejiang University
Tao Liu
State Key Lab of CAD & CG, Zhejiang University
Junzhe He
ETH Zurich
Reinforcement Learning Β· Robot Learning
Yangjinhui Xu
State Key Lab of CAD & CG, Zhejiang University
Xiaoyang Liu
State Key Lab of CAD & CG, Zhejiang University
Zeju Li
Zhejiang University
Hao Chen
Zhejiang University
Computer Science
Chunhua Shen
Zhejiang University
Computer Vision Β· Machine Learning