SesaHand: Enhancing 3D Hand Reconstruction via Controllable Generation with Semantic and Structural Alignment

📅 2026-02-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing 3D hand reconstruction methods often rely on synthetic data, lacking realistic textures, environmental diversity, and contextual information involving arms or interacting objects. Meanwhile, generative models frequently suffer from insufficient alignment between semantics and structure. To address these limitations, this work proposes a controllable generation framework that leverages a vision-language model to extract action semantics for enhanced human-context awareness. The framework incorporates a chain-of-thought-driven semantic alignment mechanism, a hierarchical structural fusion strategy, and a hand-structure attention module to jointly optimize semantic guidance and structural constraints. Experimental results demonstrate that the proposed method significantly outperforms existing approaches in terms of human relevance, structural fidelity, and 3D hand reconstruction accuracy.

๐Ÿ“ Abstract
Recent studies on 3D hand reconstruction have demonstrated the effectiveness of synthetic training data for improving estimation performance. However, most methods rely on game engines to synthesize hand images, which often lack diversity in textures and environments, and fail to include crucial components such as arms or interacting objects. Generative models are promising alternatives for generating diverse hand images, but they still suffer from misalignment issues. In this paper, we present SesaHand, which enhances controllable hand image generation from both semantic and structural alignment perspectives for 3D hand reconstruction. Specifically, for semantic alignment, we propose a pipeline with Chain-of-Thought inference to extract human behavior semantics from image captions generated by a vision-language model. These semantics suppress human-irrelevant environmental details and ensure sufficient human-centric context for hand image generation. For structural alignment, we introduce hierarchical structural fusion, which integrates structural information at different granularities for feature refinement, better aligning the hand with the overall human body in generated images. We further propose a hand structure attention enhancement method that efficiently strengthens the model's attention on hand regions. Experiments demonstrate that our method not only outperforms prior work in generation performance but also improves 3D hand reconstruction with the generated hand images.
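The semantic-alignment step described in the abstract (extracting human behavior semantics from VLM captions while suppressing environment details) could be sketched as a toy caption filter. This is a minimal illustrative stand-in, not the paper's method: the paper uses Chain-of-Thought inference with a vision-language model, whereas the keyword lists and function below are purely hypothetical.

```python
# Toy sketch of the semantic-alignment idea: keep human/behavior-centric
# clauses of a caption, suppress environment-only ones. The cue lists and
# function name are illustrative assumptions, not from the paper (which
# uses Chain-of-Thought inference with a VLM instead of keyword matching).

HUMAN_CUES = {"hand", "hands", "arm", "person", "holding", "grasping",
              "pointing", "typing", "waving", "gesture"}
ENV_CUES = {"sky", "wall", "background", "lighting", "street", "room"}

def extract_behavior_semantics(caption: str) -> str:
    """Filter a comma-separated caption, favoring human-action content."""
    kept = []
    for clause in caption.split(","):
        words = set(clause.lower().replace(".", "").split())
        if words & HUMAN_CUES:        # human-centric clause: keep
            kept.append(clause.strip())
        elif words & ENV_CUES:        # environment-only clause: suppress
            continue
        else:                         # neutral clause: keep for context
            kept.append(clause.strip())
    return ", ".join(kept)
```

Applied to a caption such as `"a person holding a cup, bright blue sky in the background, standing near a table"`, the filter drops the environment clause and keeps the human-centric and neutral ones.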
Problem

Research questions and friction points this paper is trying to address.

3D hand reconstruction
synthetic data
semantic alignment
structural alignment
hand image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

controllable generation
semantic alignment
structural alignment
3D hand reconstruction
hierarchical structural fusion