🤖 AI Summary
Accurate 3D hand pose reconstruction under severe occlusion (e.g., when an object covers most of the hand) remains a challenging open problem. Method: We propose a diffusion-based generative framework guided by functional cognition. Specifically, we leverage a vision-language model to parse the functional intent of hand-object interactions (e.g., "grasping", "supporting") and inject this textual description as a conditioning signal into a diffusion model, establishing a hand pose prior that is both geometrically plausible and functionally consistent. The text-conditioned diffusion process then reconstructs occluded hand regions in a semantics-driven manner, completing the hand geometry in accordance with its function. Contribution/Results: To our knowledge, this is the first work to explicitly incorporate vision-language-derived functional semantics into hand pose generation. Experiments on the heavily occluded HOGraspNet dataset demonstrate that our method significantly outperforms existing regression-based approaches and semantics-agnostic diffusion baselines, improving both pose accuracy and functional plausibility.
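The summary does not name the VLM or the exact prompt used to extract functional intent. As a minimal sketch of this step, the snippet below uses BLIP-2 via Hugging Face `transformers` as a stand-in captioner; the model choice, the prompt, and the file name `hoi_frame.png` are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative sketch only: the paper's VLM and prompt are unspecified.
# BLIP-2 (via Hugging Face transformers) stands in as an example captioner.
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

image = Image.open("hoi_frame.png")  # hypothetical hand-object interaction frame
prompt = "Question: How is the hand using the object? Answer:"

inputs = processor(images=image, text=prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
affordance_text = processor.batch_decode(out, skip_special_tokens=True)[0].strip()
# e.g. "grasping the mug by its handle" -- this string becomes the
# conditioning signal for the diffusion prior described above.
```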
📝 Abstract
How can we reconstruct 3D hand poses when large portions of the hand are heavily occluded by itself or by objects? Humans often resolve such ambiguities by leveraging contextual knowledge -- such as affordances, where an object's shape and function suggest how it is typically grasped. Inspired by this observation, we propose a generative prior for hand pose refinement guided by affordance-aware textual descriptions of hand-object interactions (HOI). Our method employs a diffusion-based generative model that learns the distribution of plausible hand poses conditioned on affordance descriptions, which are inferred by a large vision-language model (VLM). This enables the refinement of occluded regions into more accurate and functionally coherent hand poses. Extensive experiments on HOGraspNet, a 3D hand-affordance dataset with severe occlusions, demonstrate that our affordance-guided refinement significantly improves hand pose estimation over both recent regression methods and diffusion-based refinement lacking contextual reasoning.
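To make the conditioning mechanism concrete, here is a minimal sketch (not the authors' code) of a text-conditioned diffusion prior over hand pose parameters: a DDPM-style denoiser that takes the noisy pose, the timestep, and an embedding of the affordance description. The 48-dim MANO axis-angle pose, the 512-dim text embedding (e.g., from a CLIP text encoder), and the MLP architecture are all assumptions for illustration.

```python
# Minimal sketch of a text-conditioned DDPM prior over hand poses.
# POSE_DIM, TEXT_DIM, and the MLP denoiser are illustrative assumptions.
import torch
import torch.nn as nn

POSE_DIM = 48   # assumed: MANO global orientation + 15 joint rotations (axis-angle)
TEXT_DIM = 512  # assumed: embedding of the affordance description (e.g., CLIP text)
T = 1000        # number of diffusion steps

class PoseDenoiser(nn.Module):
    """Predicts the noise added to a hand pose, conditioned on timestep and text."""
    def __init__(self, hidden=512):
        super().__init__()
        self.time_embed = nn.Embedding(T, hidden)
        self.text_proj = nn.Linear(TEXT_DIM, hidden)
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM + 2 * hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, POSE_DIM),
        )

    def forward(self, x_t, t, text_emb):
        # Concatenate noisy pose, timestep embedding, and projected text condition.
        cond = torch.cat([x_t, self.time_embed(t), self.text_proj(text_emb)], dim=-1)
        return self.net(cond)

# Standard DDPM training step: noise a clean pose, recover the noise under the text condition.
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, x0, text_emb):
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    ab = alphas_bar[t].unsqueeze(-1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps  # forward diffusion q(x_t | x_0)
    return nn.functional.mse_loss(model(x_t, t, text_emb), eps)
```

Consistent with the abstract's refinement framing, one plausible inference scheme would initialize from a regressor's pose estimate, partially noise it, and denoise it under the text condition (an SDEdit-style refinement); the paper's exact sampling procedure is not specified here.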