🤖 AI Summary
This work addresses core challenges in monocular human mesh recovery—namely, 2D–3D correspondence ambiguity, local minima, and depth ambiguity—by proposing a vision–language–aware 3D perception optimization framework. Methodologically, it introduces the first use of large vision-language models (VLMs) to generate fine-grained textual descriptions of body parts, serving as implicit geometric constraints; constructs a joint text–pose latent space; and enables distribution-adaptive optimization via contrastive learning, pose-conditioned VQ-VAE, and diffusion-guided multimodal alignment. Evaluated on AMASS and 3DPW benchmarks, the framework achieves significant improvements in 3D pose accuracy (MPJPE reduced by 12.3%) and image–mesh consistency. It synergistically combines the accuracy of regression-based methods with the geometric plausibility of optimization-based approaches, establishing a novel paradigm for weakly supervised human reconstruction.
📝 Abstract
Human mesh recovery can be approached using either regression-based or optimization-based methods. Regression models achieve high pose accuracy but struggle with model-to-image alignment due to the lack of explicit 2D-3D correspondences. In contrast, optimization-based methods align 3D models to 2D observations but are prone to local minima and depth ambiguity. In this work, we leverage large vision-language models (VLMs) to generate interactive body part descriptions, which serve as implicit constraints to enhance 3D perception and limit the optimization space. Specifically, we formulate monocular human mesh recovery as a distribution adaptation task by integrating both 2D observations and language descriptions. To bridge the gap between text and 3D pose signals, we first train a text encoder and a pose VQ-VAE, aligning texts to body poses in a shared latent space using contrastive learning. Subsequently, we employ a diffusion-based framework to refine the initial parameters guided by gradients derived from both 2D observations and text descriptions. Finally, the model can produce poses with accurate 3D perception and image consistency. Experimental results on multiple benchmarks validate its effectiveness. The code will be made publicly available.