SiMHand: Mining Similar Hands for Large-Scale 3D Hand Pose Pre-training

📅 2025-02-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the scarcity of high-quality supervision for pre-training 3D hand pose estimation models in unconstrained, real-world scenarios, this paper proposes a contrastive learning framework that leverages cross-image pairs of semantically similar hands. Such pairs are mined automatically from over two million real-world video frames sourced from 100DOH and Ego4D. Crucially, the authors introduce a novel distance-adaptive weighted contrastive loss that overcomes the limitations of conventional single-image augmentation for positive-pair construction, enabling more robust hand feature embedding and similarity modeling. Evaluated on FreiHand, DexYCB, and AssemblyHands, the method outperforms the state-of-the-art PeCLR by 15%, 10%, and 4%, respectively. It significantly improves 3D hand pose estimation accuracy, particularly under zero-shot and few-shot settings, demonstrating superior generalization to unseen domains and limited labeled data.
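The pair-mining step described above can be illustrated with a small sketch. The paper's exact mining pipeline is not specified here; this hypothetical version assumes hands are represented by normalized 2D keypoints (e.g. from an off-the-shelf detector) and pairs each sample with its nearest non-identical pose neighbor.

```python
import numpy as np

def mine_similar_pairs(keypoints):
    """Pair each hand with its most similar non-identical hand.

    keypoints: (N, 21, 2) array of 2D hand keypoints, assumed already
    normalized for scale and translation. Returns an (N,) array giving
    the index of each sample's nearest pose neighbor.
    (Hypothetical sketch; the paper's actual mining may differ.)
    """
    flat = keypoints.reshape(len(keypoints), -1)        # (N, 42)
    # Pairwise Euclidean distances between flattened pose vectors.
    dist = np.linalg.norm(flat[:, None] - flat[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)                      # exclude self-pairs
    return dist.argmin(axis=1)
```

At the scale reported (2M+ frames), a brute-force distance matrix would not fit in memory; an approximate nearest-neighbor index would replace the pairwise computation in practice.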

📝 Abstract
We present a framework for pre-training of 3D hand pose estimation from in-the-wild hand images sharing similar hand characteristics, dubbed SiMHand. Pre-training with large-scale images achieves promising results in various tasks, but prior methods for 3D hand pose pre-training have not fully utilized the potential of diverse hand images accessible from in-the-wild videos. To facilitate scalable pre-training, we first prepare an extensive pool of hand images from in-the-wild videos and design our pre-training method with contrastive learning. Specifically, we collect over 2.0M hand images from recent human-centric videos, such as 100DOH and Ego4D. To extract discriminative information from these images, we focus on the similarity of hands: pairs of non-identical samples with similar hand poses. We then propose a novel contrastive learning method that embeds similar hand pairs closer in the feature space. Our method not only learns from similar samples but also adaptively weights the contrastive learning loss based on inter-sample distance, leading to additional performance gains. Our experiments demonstrate that our method outperforms conventional contrastive learning approaches that produce positive pairs solely from a single image with data augmentation. We achieve significant improvements over the state-of-the-art method (PeCLR) on various datasets, with gains of 15% on FreiHand, 10% on DexYCB, and 4% on AssemblyHands. Our code is available at https://github.com/ut-vision/SiMHand.
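The distance-adaptive weighting the abstract describes can be sketched as follows. The exact formulation lives in the paper and the linked repository; this hypothetical version assumes an InfoNCE-style contrastive loss over mined cross-image pairs, where each pair's contribution decays exponentially with the pose distance between its two hands (the weighting scheme and the `sigma` parameter are illustrative assumptions).

```python
import numpy as np

def weighted_contrastive_loss(anchors, positives, pose_dist,
                              temperature=0.1, sigma=1.0):
    """InfoNCE-style loss over mined similar-hand pairs.

    anchors, positives: (N, D) L2-normalized embeddings; row i of
    `positives` is the cross-image similar-hand match for anchor i.
    pose_dist: (N,) pose distance for each mined pair; closer pairs
    get larger weights (hypothetical exponential-decay scheme).
    """
    # Distance-adaptive weights: more similar poses contribute more.
    w = np.exp(-pose_dist / sigma)                       # (N,)
    # Similarity logits between every anchor and every positive.
    logits = anchors @ positives.T / temperature         # (N, N)
    # Row-wise log-softmax; diagonal entries are the true pairs.
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    per_pair = -np.diag(log_prob)                        # standard InfoNCE term
    return float((w * per_pair).sum() / w.sum())         # weighted mean
```

Setting all weights equal recovers plain InfoNCE over the mined pairs, which makes the adaptive weighting easy to ablate.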
Problem

Research questions and friction points this paper is trying to address.

Pre-training 3D hand pose estimation
Utilizing diverse in-the-wild hand images
Enhancing contrastive learning with similar hand pairs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes in-the-wild hand images
Employs novel contrastive learning method
Adaptively weights contrastive learning loss