Zero-shot Reconstruction of In-Scene Object Manipulation from Video

📅 2025-12-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the ill-posedness of scene-level object manipulation reconstruction from monocular RGB video, hand-object depth ambiguity, and lack of physical plausibility, this paper proposes the first zero-shot, scene-centric joint reconstruction framework. Methodologically, it departs from conventional hand-centric paradigms and instead integrates CLIP/SAM/3D diffusion priors for initialization, coupled with differentiable rendering, multi-view geometric constraints, contact-force regularization, and a two-stage co-optimization scheme—enabling simultaneous estimation of hand pose, object deformation and pose, and scene geometry without ground-truth annotations. The approach significantly improves metric consistency and physical realism, achieving centimeter-level accuracy and high temporal coherence even under severe occlusion and dynamic motion. It establishes a novel paradigm for real-scale, joint hand–object–scene inference.

Technology Category

Application Category

📝 Abstract
We build the first system to address the problem of reconstructing in-scene object manipulation from a monocular RGB video. It is challenging due to ill-posed scene reconstruction, ambiguous hand-object depth, and the need for physically plausible interactions. Existing methods operate in hand centric coordinates and ignore the scene, hindering metric accuracy and practical use. In our method, we first use data-driven foundation models to initialize the core components, including the object mesh and poses, the scene point cloud, and the hand poses. We then apply a two-stage optimization that recovers a complete hand-object motion from grasping to interaction, which remains consistent with the scene information observed in the input video.
Problem

Research questions and friction points this paper is trying to address.

Reconstructs in-scene object manipulation from monocular video
Addresses ill-posed scene reconstruction and ambiguous hand-object depth
Ensures physically plausible interactions with scene consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-driven foundation models initialize object, scene, and hand components
Two-stage optimization recovers complete hand-object motion from grasping to interaction
Ensures motion consistency with scene information from input video
🔎 Similar Papers
No similar papers found.