PMMD: A pose-guided multi-view multi-modal diffusion for person generation

📅 2025-12-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address poor pose controllability, low appearance consistency, challenging occlusion handling, and garment-style drift in virtual try-on and digital human generation, this paper proposes the first multimodal diffusion framework tailored for multi-view human image synthesis. Methodologically, we introduce a ResCVA module for local detail enhancement and a cross-modal fusion mechanism to jointly condition synthesis on pose maps, multi-view reference images, and text prompts. A residual conditional variational autoencoder structure is incorporated to improve identity fidelity, while a full-pipeline text–image semantic alignment module ensures robust cross-modal consistency. Evaluated on the DeepFashion MultiModal dataset, our method achieves state-of-the-art performance in pose alignment accuracy, garment detail preservation, and text controllability—significantly outperforming existing mainstream approaches.

Technology Category

Application Category

📝 Abstract
Generating consistent human images with controllable pose and appearance is essential for applications in virtual try on, image editing, and digital human creation. Current methods often suffer from occlusions, garment style drift, and pose misalignment. We propose Pose-guided Multi-view Multimodal Diffusion (PMMD), a diffusion framework that synthesizes photorealistic person images conditioned on multi-view references, pose maps, and text prompts. A multimodal encoder jointly models visual views, pose features, and semantic descriptions, which reduces cross modal discrepancy and improves identity fidelity. We further design a ResCVA module to enhance local detail while preserving global structure, and a cross modal fusion module that integrates image semantics with text throughout the denoising pipeline. Experiments on the DeepFashion MultiModal dataset show that PMMD outperforms representative baselines in consistency, detail preservation, and controllability. Project page and code are available at https://github.com/ZANMANGLOOPYE/PMMD.
Problem

Research questions and friction points this paper is trying to address.

Generates consistent human images with controllable pose and appearance
Addresses occlusions, garment style drift, and pose misalignment issues
Synthesizes photorealistic person images using multi-view references and text prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pose-guided multi-view multimodal diffusion framework
Multimodal encoder reduces cross-modal discrepancy
ResCVA module enhances local detail and global structure