🤖 AI Summary
Existing methods typically reconstruct isolated hands in a local coordinate system, neglecting the surrounding 3D scene and thus failing to support embodied agents in understanding physical interactions. This work proposes the first online, monocular video-driven framework for joint 4D hand–scene reconstruction. By introducing a scene-aware visual prompting mechanism, it injects high-fidelity priors from a pretrained hand expert model into a persistent scene memory, enabling collaborative inference with a 4D scene foundation model. The method simultaneously outputs accurate hand meshes and dense, metric-scale scene geometry in a single forward pass, without requiring offline optimization, and achieves real-time 4D hand–scene reconstruction for the first time. It delivers competitive performance on both local hand reconstruction and global localization tasks.
📝 Abstract
For embodied AI, jointly reconstructing dynamic hands and their dense scene context is crucial for understanding physical interaction. However, most existing methods recover isolated hands in local coordinates, overlooking the surrounding 3D environment. To address this, we present Hand3R, the first online framework for joint 4D hand-scene reconstruction from monocular video. Hand3R synergizes a pretrained hand expert with a 4D scene foundation model via a scene-aware visual prompting mechanism. By injecting high-fidelity hand priors into a persistent scene memory, our approach enables simultaneous reconstruction of accurate hand meshes and dense metric-scale scene geometry in a single forward pass. Experiments demonstrate that Hand3R eliminates the need for offline optimization and delivers competitive performance in both local hand reconstruction and global positioning.
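The scene-aware visual prompting described above can be pictured as a cross-attention step in which persistent scene-memory tokens attend to prompt tokens produced by the hand expert, with joint heads emitting hand parameters and scene geometry in one forward pass. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the module name `ScenePromptedFusion`, all tensor shapes, and the head dimensionalities are assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch of the scene-aware visual prompting idea (hypothetical;
# module names, shapes, and output heads are illustrative assumptions).
import torch
import torch.nn as nn


class ScenePromptedFusion(nn.Module):
    """Injects hand-expert tokens into a persistent scene memory via cross-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Cross-attention: scene-memory tokens (queries) attend to hand prompt tokens.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Joint output heads (dimensionalities are placeholders):
        # a MANO-style hand parameter vector and a per-token metric-scale pointmap.
        self.hand_head = nn.Linear(dim, 61)   # e.g. pose + shape + global translation
        self.point_head = nn.Linear(dim, 3)   # one 3D point per scene token

    def forward(self, scene_memory: torch.Tensor, hand_tokens: torch.Tensor):
        # scene_memory: (B, N, dim) persistent tokens from the 4D scene model
        # hand_tokens:  (B, M, dim) high-fidelity priors from the hand expert
        prompted, _ = self.cross_attn(scene_memory, hand_tokens, hand_tokens)
        fused = self.norm(scene_memory + prompted)
        # A single forward pass yields both outputs; no offline optimization step.
        return self.hand_head(fused.mean(dim=1)), self.point_head(fused)


# Usage: per-frame online update, with the scene memory persisting across frames.
model = ScenePromptedFusion()
scene_memory = torch.randn(1, 1024, 256)   # carried over between frames
hand_tokens = torch.randn(1, 64, 256)      # from the pretrained hand expert
hand_params, pointmap = model(scene_memory, hand_tokens)
```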