Adapting Human Mesh Recovery with Vision-Language Feedback

📅 2025-02-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses core challenges in monocular human mesh recovery—namely, 2D–3D correspondence ambiguity, local minima, and depth ambiguity—by proposing a vision–language–aware 3D perception optimization framework. Methodologically, it introduces the first use of large vision-language models (VLMs) to generate fine-grained textual descriptions of body parts, serving as implicit geometric constraints; constructs a joint text–pose latent space; and enables distribution-adaptive optimization via contrastive learning, pose-conditioned VQ-VAE, and diffusion-guided multimodal alignment. Evaluated on AMASS and 3DPW benchmarks, the framework achieves significant improvements in 3D pose accuracy (MPJPE reduced by 12.3%) and image–mesh consistency. It synergistically combines the accuracy of regression-based methods with the geometric plausibility of optimization-based approaches, establishing a novel paradigm for weakly supervised human reconstruction.

Technology Category

Application Category

📝 Abstract
Human mesh recovery can be approached using either regression-based or optimization-based methods. Regression models achieve high pose accuracy but struggle with model-to-image alignment due to the lack of explicit 2D-3D correspondences. In contrast, optimization-based methods align 3D models to 2D observations but are prone to local minima and depth ambiguity. In this work, we leverage large vision-language models (VLMs) to generate interactive body part descriptions, which serve as implicit constraints to enhance 3D perception and limit the optimization space. Specifically, we formulate monocular human mesh recovery as a distribution adaptation task by integrating both 2D observations and language descriptions. To bridge the gap between text and 3D pose signals, we first train a text encoder and a pose VQ-VAE, aligning texts to body poses in a shared latent space using contrastive learning. Subsequently, we employ a diffusion-based framework to refine the initial parameters guided by gradients derived from both 2D observations and text descriptions. Finally, the model can produce poses with accurate 3D perception and image consistency. Experimental results on multiple benchmarks validate its effectiveness. The code will be made publicly available.
Problem

Research questions and friction points this paper is trying to address.

Enhance 3D human mesh recovery
Integrate vision-language feedback
Improve model-to-image alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language Feedback enhances 3D perception
Contrastive Learning aligns texts and body poses
Diffusion Framework refines initial model parameters
🔎 Similar Papers
No similar papers found.
C
Chongyang Xu
College of Computer Science, Sichuan University, Chengdu 610065, China
Buzhen Huang
Buzhen Huang
Southeast University
Computer VisionComputer Graphics
C
Chengfang Zhang
Intelligent Policing Key Laboratory of Sichuan Province, Sichuan Police College, Luzhou, 646000, China
Z
Ziliang Feng
College of Computer Science, Sichuan University, Chengdu 610065, China
Yangang Wang
Yangang Wang
Professor, Southeast University
Computer graphicsComputer visionComputational photography