🤖 AI Summary
This work addresses the challenging problem of 3D hand pose estimation in RGB images under hand–object interaction and severe inter-hand occlusion. To enhance robustness in occluded regions, we propose a hand-identity-aware cross-attention Transformer framework that explicitly models joint-to-hand assignment. Our method comprises four key components: (i) CNN-based coarse hand localization; (ii) a context-enhancement module for richer spatial-temporal feature representation; (iii) hand-identity-aware self-attention to disambiguate occluded joints via identity-specific attention biases; and (iv) a hand–object pose-fused cross-attention decoder that jointly refines hand poses conditioned on object geometry. Evaluated on InterHand2.6M, HO3D, and H₂O3D, our approach achieves state-of-the-art performance, significantly improving 3D keypoint accuracy and hand-identity consistency under occlusion. Notably, it is the first method to enable fine-grained, hand-identity-guided occlusion-aware pose decoding.
📝 Abstract
Occlusion is one of the most challenging issues in 3D hand pose estimation, and it becomes more prominent when a hand interacts with an object or when two hands are involved. Prior work has paid little attention to these occluded regions, yet they contain information that is vital for accurate 3D hand pose estimation. In this paper, we therefore propose an occlusion-robust and accurate method for estimating 3D hand–object pose from an input RGB image. Our method first localises the hand joints with a CNN-based model and then refines them by extracting contextual information. A self-attention transformer then associates each joint with a hand identity, allowing the model to determine which hand a joint belongs to and thereby detect joints even in occluded regions. These identity-aware joints are then used to estimate the pose via a cross-attention mechanism. By recovering joints in occluded regions, the resulting network becomes robust to occlusion and achieves state-of-the-art results on the InterHand2.6M, HO3D and H$_2$O3D datasets.
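The central idea of the abstract, tagging each joint token with its hand identity before self-attention so that occluded joints can be resolved from same-hand context, can be illustrated with a minimal NumPy sketch. All names, shapes, and the random stand-ins for learned weights below are hypothetical; this is not the paper's implementation, only the attention mechanism it describes:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def identity_aware_self_attention(joint_feats, hand_ids, d_model=32, seed=0):
    """Hypothetical sketch of hand-identity-aware self-attention.

    joint_feats: (J, D) per-joint features from the CNN/context stage
    hand_ids:    (J,) 0 for left hand, 1 for right hand
    Each joint token is augmented with an identity embedding, so attention
    can weigh same-hand joints when disambiguating occluded ones.
    """
    rng = np.random.default_rng(seed)
    J, D = joint_feats.shape
    # one embedding per hand identity; stands in for learned parameters
    id_embed = rng.normal(size=(2, D))
    tokens = joint_feats + id_embed[hand_ids]
    # random projections stand in for learned Q/K/V weight matrices
    Wq, Wk, Wv = (rng.normal(size=(D, d_model)) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_model))     # (J, J) joint-to-joint weights
    return attn @ V, attn
```

In the full method these identity-tagged joint tokens would then feed the cross-attention decoder, with object features supplying the keys and values; the sketch stops at the self-attention stage.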