Aligning Foundation Model Priors and Diffusion-Based Hand Interactions for Occlusion-Resistant Two-Hand Reconstruction

📅 2025-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Monocular 3D hand reconstruction suffers from severe occlusion, pose misalignment, and interpenetration artifacts caused by dynamic hand–hand interactions. To address these challenges, we propose a fusion-aligned encoder that distills 2D priors—including keypoints, segmentation masks, and depth maps—from multimodal foundation models (e.g., SAM, Depth-Anything) during training, yet operates entirely self-contained at inference time. Furthermore, we introduce the first gradient-guided conditional diffusion model explicitly designed for bimanual interaction, enabling end-to-end correction from interpenetrating to physically plausible, non-interpenetrating hand poses. Evaluated on InterHand2.6M, FreiHAND, and HIC, our method achieves state-of-the-art performance: it reduces relative inter-hand error by 21.3% under occlusion and lowers interpenetration rate to 1.2%, significantly improving interaction plausibility and spatial consistency.
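The summary's central training trick is that the encoder absorbs 2D priors (keypoints, masks, depth) from foundation models during training but needs none of them at test time. A minimal sketch of that setup, assuming a PyTorch-style backbone with auxiliary distillation heads (all class names, shapes, and loss weights here are illustrative assumptions, not the paper's actual architecture):

```python
# Hypothetical sketch: auxiliary heads distill 2D priors (keypoints,
# segmentation, depth) produced offline by foundation models such as
# SAM or Depth-Anything; at inference only the shared backbone runs,
# so no foundation model is needed at test time.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionAlignmentEncoder(nn.Module):
    def __init__(self, feat_dim=64, num_kpts=42):
        super().__init__()
        # shared backbone, used at both train and test time
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # auxiliary heads, used only to absorb foundation-model priors
        self.kpt_head = nn.Conv2d(feat_dim, num_kpts, 1)  # keypoint heatmaps
        self.seg_head = nn.Conv2d(feat_dim, 2, 1)         # left/right masks
        self.depth_head = nn.Conv2d(feat_dim, 1, 1)       # relative depth

    def forward(self, img, with_priors=False):
        feats = self.backbone(img)
        if not with_priors:  # inference path: encoder features only
            return feats
        return (feats, self.kpt_head(feats),
                self.seg_head(feats), self.depth_head(feats))

def distillation_loss(model, img, kpt_t, seg_t, depth_t):
    """Align auxiliary predictions with foundation-model outputs."""
    _, kpt, seg, depth = model(img, with_priors=True)
    return (F.mse_loss(kpt, kpt_t)
            + F.binary_cross_entropy_with_logits(seg, seg_t)
            + F.l1_loss(depth, depth_t))
```

Because the distillation targets are only consumed by the auxiliary heads, dropping those heads after training leaves an inference path that is self-contained, which matches the efficiency claim in the summary.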

📝 Abstract
Two-hand reconstruction from monocular images faces persistent challenges due to complex, dynamic hand postures and occlusions, which make plausible interaction alignment difficult to achieve. Existing approaches struggle with this alignment, often producing misalignment and penetration artifacts. To tackle this, we propose a novel framework that precisely aligns hand poses and interactions by synergistically integrating foundation model-driven 2D priors with diffusion-based interaction refinement for occlusion-resistant two-hand reconstruction. First, we introduce a Fusion Alignment Encoder that learns to align fused multimodal priors (keypoints, segmentation maps, and depth cues) from foundation models during training. This provides robust structured guidance while enabling efficient inference without foundation models at test time, maintaining high reconstruction accuracy. Second, we employ a two-hand diffusion model explicitly trained to transform interpenetrated poses into plausible, non-penetrated interactions, leveraging gradient-guided denoising to correct artifacts and ensure realistic spatial relations. Extensive evaluations demonstrate that our method achieves state-of-the-art performance on the InterHand2.6M, FreiHAND, and HIC datasets, significantly advancing occlusion handling and interaction robustness.
Problem

Research questions and friction points this paper is trying to address.

Overcoming occlusion challenges in two-hand reconstruction from images
Aligning hand poses to prevent misalignment and penetration artifacts
Integrating foundation models and diffusion for robust interaction refinement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fusion Alignment Encoder integrates multimodal priors
Two-hand diffusion model corrects interpenetrated poses
Gradient-guided denoising ensures realistic spatial relations
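The last two bullets can be illustrated with a toy gradient-guided reverse-diffusion step: at each step, the gradient of a penetration penalty nudges the denoised pose toward a non-interpenetrating configuration. Everything below (the sphere-overlap penalty, the joint layout, the guidance scale) is an assumption for illustration, not the paper's actual formulation:

```python
# Illustrative sketch of gradient-guided denoising for two-hand poses.
# Poses are stacked 3D joint positions: first J rows = left hand,
# last J rows = right hand. The penalty is a simple sphere-overlap
# term (an assumption, not the paper's loss).
import torch

def penetration_penalty(pose, radius=0.01):
    """Sum of overlaps between left- and right-hand joint spheres."""
    J = pose.shape[0] // 2
    left, right = pose[:J], pose[J:]
    d = torch.cdist(left, right)                 # pairwise joint distances
    return torch.clamp(radius - d, min=0).sum()  # positive only when overlapping

def guided_denoise_step(x_t, denoiser, t, guidance_scale=0.1):
    """One reverse step: denoise, then push the pose out of penetration."""
    x_t = x_t.detach().requires_grad_(True)
    x_pred = denoiser(x_t, t)                    # model's denoised pose estimate
    grad = torch.autograd.grad(penetration_penalty(x_pred), x_t)[0]
    return (x_pred - guidance_scale * grad).detach()
```

With an identity denoiser and two joints placed closer than `radius`, a single guided step separates them, so the penalty drops; in a full pipeline the same gradient term would be folded into each reverse-diffusion iteration.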