🤖 AI Summary
This paper addresses camera-parameter-free 3D hand pose estimation from a single color image. To tackle challenges including depth ambiguity, occlusion, and anatomical complexity, we propose a calibration-free optimization framework: (1) an implicit camera alignment mechanism that eliminates reliance on intrinsic camera parameters; (2) a fingertip-guided loss to significantly improve distal joint localization accuracy; and (3) an end-to-end differentiable pipeline integrating 2D keypoint supervision, articulated hand priors, and differentiable geometric constraints. Our method achieves state-of-the-art performance on the EgoDexter and Dexter+Object benchmarks and demonstrates strong generalization and robustness on in-the-wild images. The source code is publicly available.
📝 Abstract
Estimating the 3D hand articulation from a single color image is a long-standing problem with applications in Augmented Reality (AR), Virtual Reality (VR), Human-Computer Interaction (HCI), and robotics. Apart from the absence of depth information, occlusions, articulation complexity, and the need for knowledge of the camera parameters pose additional challenges. In this work, we propose an optimization pipeline for estimating the 3D hand articulation from 2D keypoint input, which includes a keypoint alignment step and a fingertip loss to overcome the need to know or estimate the camera parameters. We evaluate our approach on the EgoDexter and Dexter+Object benchmarks to show that it performs competitively with the SotA, while also demonstrating its robustness when processing "in-the-wild" images without any prior camera knowledge. Our quantitative analysis highlights the method's sensitivity to 2D keypoint estimation accuracy, despite the use of hand priors. Code is available at https://github.com/cpantazop/HandRepo
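To make the core idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of the two ingredients the abstract names: a keypoint alignment step that fits an orthographic projection of the 3D joints to the detected 2D keypoints in closed form, so no camera intrinsics are needed, and a fingertip-weighted reprojection loss that emphasizes the distal joints. The 21-joint ordering and fingertip indices follow the common MANO-style convention and are assumptions.

```python
import numpy as np

# Fingertip indices in the common 21-joint hand layout (assumed convention).
FINGERTIPS = [4, 8, 12, 16, 20]

def align_2d(proj, kp2d):
    """Closed-form scale s and translation t minimizing ||s*proj + t - kp2d||^2.

    This stands in for the paper's keypoint alignment step: it removes the
    dependence on camera intrinsics by solving for a similarity fit instead.
    """
    mp, mk = proj.mean(axis=0), kp2d.mean(axis=0)
    pc, kc = proj - mp, kp2d - mk
    s = (pc * kc).sum() / (pc * pc).sum()   # least-squares scale
    t = mk - s * mp                          # least-squares translation
    return s * proj + t

def fingertip_weighted_loss(joints3d, kp2d, tip_weight=3.0):
    """Reprojection error with extra weight on fingertips (hypothetical form)."""
    proj = joints3d[:, :2]                   # orthographic projection (drop depth)
    aligned = align_2d(proj, kp2d)
    w = np.ones(len(kp2d))
    w[FINGERTIPS] = tip_weight               # emphasize distal joints
    residuals = np.linalg.norm(aligned - kp2d, axis=1)
    return (w * residuals).sum() / w.sum()
```

In a full pipeline this loss would be minimized over the pose parameters of an articulated hand prior; here it only illustrates why the alignment step makes the objective independent of the camera parameters.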