🤖 AI Summary
This work tackles the severe occlusions in 3D hand pose estimation caused by self-occlusion and hand-object interaction by introducing the state space model Mamba to this task for the first time. The authors propose a multimodal, point-cloud-driven correspondence modeling framework that dynamically learns the topological structure of hand keypoints under occlusion through a local information injection and filtering module, while integrating multimodal image features to enrich the input representation. Evaluated on three benchmark datasets, the method significantly outperforms existing state-of-the-art approaches, demonstrating strong robustness, accuracy, and generalization, particularly in scenarios involving severe occlusion.
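To make the core idea concrete, the sketch below treats the hand keypoints as a token sequence and runs a Mamba-style selective state space scan over them. This is a minimal reading of the approach, not the authors' implementation: the module name (`KeypointSSM`), all dimensions, and the plain sequential scan are assumptions for clarity.

```python
# A minimal sketch of a Mamba-style selective state space scan over hand
# keypoint tokens. All names (KeypointSSM), dimensions, and the plain
# Python loop are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointSSM(nn.Module):
    """Selective SSM over a sequence of keypoint tokens of shape (B, K, D)."""
    def __init__(self, dim: int, state: int = 16):
        super().__init__()
        self.state = state
        # Input-dependent (selective) parameters, the key idea behind Mamba.
        self.to_delta = nn.Linear(dim, dim)    # per-token step size
        self.to_B = nn.Linear(dim, state)      # input -> hidden-state projection
        self.to_C = nn.Linear(dim, state)      # hidden-state -> output projection
        self.A_log = nn.Parameter(torch.zeros(dim, state))  # A = -exp(A_log) < 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        Bsz, K, D = x.shape
        delta = F.softplus(self.to_delta(x))       # (B, K, D), positive steps
        A = -torch.exp(self.A_log)                 # (D, N), stable decay
        Bmat, Cmat = self.to_B(x), self.to_C(x)    # (B, K, N) each
        h = x.new_zeros(Bsz, D, self.state)        # hidden state
        ys = []
        for k in range(K):  # sequential scan over the keypoint "sequence"
            dA = torch.exp(delta[:, k].unsqueeze(-1) * A)              # (B, D, N)
            dB = delta[:, k].unsqueeze(-1) * Bmat[:, k].unsqueeze(1)   # (B, D, N)
            h = dA * h + dB * x[:, k].unsqueeze(-1)                    # recurrence
            ys.append((h * Cmat[:, k].unsqueeze(1)).sum(-1))           # (B, D)
        return torch.stack(ys, dim=1)              # (B, K, D)

tokens = torch.randn(2, 21, 64)        # 21 hand keypoints, 64-dim features
print(KeypointSSM(64)(tokens).shape)   # torch.Size([2, 21, 64])
```

Scanning the keypoints as a sequence lets each joint's hidden state depend on the joints already seen, which is one plausible way to model the kinematic topology the summary describes.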
📝 Abstract
3D hand pose estimation, which involves accurately localizing the 3D keypoints of a human hand, is crucial for many human-computer interaction applications such as augmented reality. However, this task poses significant challenges due to self-occlusion of the hands and occlusions caused by interactions with objects. In this paper, we propose HandMCM to address these challenges. HandMCM is a novel method built on the powerful state space model Mamba. By incorporating modules for local information injection/filtering and correspondence modeling, the proposed correspondence Mamba effectively learns the highly dynamic kinematic topology of keypoints across various occlusion scenarios. Moreover, by integrating multi-modal image features, we enhance the robustness and representational capacity of the input, leading to more accurate hand pose estimation. Empirical evaluations on three benchmark datasets demonstrate that our model significantly outperforms current state-of-the-art methods, particularly in challenging scenarios involving severe occlusions. These results highlight the potential of our approach to advance the accuracy and reliability of 3D hand pose estimation in practical applications.
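The abstract's multi-modal input enrichment can be illustrated with a small fusion module: depth-derived point features are combined with image features sampled at the same points before entering the pose network. The feature sizes and the gated-fusion scheme below are assumptions for illustration; HandMCM's actual fusion design may differ.

```python
# A minimal sketch of multi-modal input enrichment: per-point depth features
# are fused with image features sampled at the same points. Feature sizes and
# the gating scheme are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Fuse two per-point feature streams of shape (B, N, C) into one."""
    def __init__(self, c_depth: int, c_rgb: int, c_out: int):
        super().__init__()
        self.proj = nn.Linear(c_depth + c_rgb, c_out)
        self.gate = nn.Sequential(nn.Linear(c_out, c_out), nn.Sigmoid())

    def forward(self, f_depth: torch.Tensor, f_rgb: torch.Tensor) -> torch.Tensor:
        fused = self.proj(torch.cat([f_depth, f_rgb], dim=-1))
        return self.gate(fused) * fused  # channel gate keeps informative features

f_depth = torch.randn(2, 1024, 64)  # e.g. PointNet-style features from depth
f_rgb = torch.randn(2, 1024, 32)    # image features sampled at the 3D points
print(MultiModalFusion(64, 32, 128)(f_depth, f_rgb).shape)  # (2, 1024, 128)
```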