PFAvatar: Pose-Fusion 3D Personalized Avatar Reconstruction from Real-World Outfit-of-the-Day Photos

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of full-body 3D human reconstruction from in-the-wild "outfit-of-the-day" (OOTD) photos, where severe pose variation, occlusions, and cluttered backgrounds impede accurate modeling. To this end, we propose an end-to-end framework for holistic appearance modeling. Methodologically: (i) we bypass image decomposition, model the full-body appearance directly with a pose-aware diffusion model, and distill the result into a Neural Radiance Field (NeRF) representation; (ii) we introduce a Condition Prior Preservation Loss (CPPL) to mitigate language drift during few-shot fine-tuning; and (iii) we combine canonical SMPL-X space sampling with multi-resolution 3D Score Distillation Sampling (3D-SDS) to enhance geometric and textural fidelity. Our method reconstructs high-fidelity avatars from a few OOTD photos in under five minutes, a 48× speed-up over state-of-the-art approaches, while significantly outperforming prior work in detail recovery, occlusion robustness, and cross-pose consistency. The resulting models support photorealistic virtual try-on and animation-driven rendering.
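The CPPL described above builds on prior-preservation fine-tuning: a subject term fits the few OOTD photos while a prior term regularizes the diffusion model against language drift. The paper conditions the prior branch on pose; the sketch below is a hypothetical simplification with plain noise-prediction MSE terms, not the paper's exact loss.

```python
import numpy as np

def prior_preservation_loss(eps_pred_subject, eps_subject,
                            eps_pred_prior, eps_prior,
                            prior_weight=1.0):
    """Sketch of a prior-preservation objective for few-shot fine-tuning.

    The subject term fits the personalized (OOTD) examples; the prior term
    keeps the model close to its class prior to mitigate language drift.
    PFAvatar's CPPL additionally conditions the prior branch on pose; here
    both terms are simple noise-prediction mean-squared errors.
    """
    subject_term = np.mean((eps_pred_subject - eps_subject) ** 2)
    prior_term = np.mean((eps_pred_prior - eps_prior) ** 2)
    return subject_term + prior_weight * prior_term
```

`prior_weight` trades personalization fidelity against drift; setting it to 0 recovers plain few-shot fine-tuning.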

📝 Abstract
We propose PFAvatar (Pose-Fusion Avatar), a new method that reconstructs high-quality 3D avatars from "Outfit of the Day" (OOTD) photos, which exhibit diverse poses, occlusions, and complex backgrounds. Our method consists of two stages: (1) fine-tuning a pose-aware diffusion model from few-shot OOTD examples and (2) distilling a 3D avatar represented by a neural radiance field (NeRF). In the first stage, unlike previous methods that segment images into assets (e.g., garments, accessories) for 3D assembly, which is prone to inconsistency, we avoid decomposition and directly model the full-body appearance. By integrating a pre-trained ControlNet for pose estimation and a novel Condition Prior Preservation Loss (CPPL), our method enables end-to-end learning of fine details while mitigating language drift in few-shot training. Our method completes personalization in just 5 minutes, achieving a 48× speed-up compared to previous approaches. In the second stage, we introduce a NeRF-based avatar representation optimized by canonical SMPL-X space sampling and Multi-Resolution 3D-SDS. Compared to mesh-based representations that suffer from resolution-dependent discretization and erroneous occluded geometry, our continuous radiance field can preserve high-frequency textures (e.g., hair) and handle occlusions correctly through transmittance. Experiments demonstrate that PFAvatar outperforms state-of-the-art methods in terms of reconstruction fidelity, detail preservation, and robustness to occlusions/truncations, advancing practical 3D avatar generation from real-world OOTD albums. In addition, the reconstructed 3D avatar supports downstream applications such as virtual try-on, animation, and human video reenactment, further demonstrating the versatility and practical value of our approach.
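The abstract credits transmittance with correct occlusion handling. In standard NeRF volume rendering, each sample along a ray contributes with weight w_i = T_i * alpha_i, where alpha_i = 1 - exp(-sigma_i * delta_i) and the transmittance T_i is the product of (1 - alpha_j) for samples in front; a minimal sketch:

```python
import numpy as np

def render_weights(sigma, delta):
    """Volume-rendering weights along one ray (standard NeRF quadrature).

    alpha_i = 1 - exp(-sigma_i * delta_i)        opacity of sample i
    T_i     = prod_{j < i} (1 - alpha_j)         transmittance reaching i
    w_i     = T_i * alpha_i                      contribution of sample i
    """
    alpha = 1.0 - np.exp(-sigma * delta)
    # transmittance before each sample: shifted cumulative product of (1 - alpha)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return trans * alpha

# empty space, then a dense occluder, then a surface hidden behind it
sigma = np.array([0.0, 50.0, 50.0])
delta = np.full(3, 0.1)
w = render_weights(sigma, delta)
# the occluded third sample receives almost no weight
```

Because the occluder absorbs nearly all transmittance, hidden geometry contributes almost nothing to the rendered color, which is how a continuous radiance field resolves occlusion without explicit visibility tests.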
Problem

Research questions and friction points this paper is trying to address.

Reconstructing 3D avatars from real-world outfit photos with diverse poses
Overcoming limitations of asset-based methods prone to inconsistency
Handling occlusions and complex backgrounds in avatar reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning pose-aware diffusion model from few-shot examples
Distilling 3D avatar using neural radiance field representation
Integrating ControlNet with novel loss for end-to-end learning
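The distillation step in the second stage follows the Score Distillation Sampling family: the frozen diffusion model's noise residual is pushed back through the differentiable render, with the U-Net Jacobian dropped. A toy sketch of one SDS-style update, using a hypothetical linear "denoiser" that pulls noised inputs toward a target image (not the paper's model):

```python
import numpy as np

def sds_step(x, noise, denoiser, lr=0.1, w=1.0):
    """One SDS-style update on a differentiable render x.

    Here x itself is the parameter (dx/dtheta is the identity), so the
    gradient reduces to w * (eps_hat - eps): the denoiser's predicted
    noise minus the noise actually added.
    """
    eps_hat = denoiser(x + noise)          # predicted noise for the noised render
    return x - lr * w * (eps_hat - noise)  # gradient step, U-Net Jacobian dropped

# toy denoiser whose noise prediction pulls samples toward a target image
target = np.ones(4)
denoiser = lambda x_noisy: x_noisy - target  # eps_hat = x_noisy - target

rng = np.random.default_rng(0)
x = np.zeros(4)
for _ in range(100):
    x = sds_step(x, rng.normal(scale=0.01, size=4), denoiser)
# repeated SDS updates drift x toward the target
```

In PFAvatar the same residual would instead flow into NeRF parameters through the renderer, at multiple resolutions and conditioned on SMPL-X pose.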
👥 Authors
Dianbing Xi, State Key Laboratory of CAD&CG, Zhejiang University
Guoyuan An, Independent Contributor
Jingsen Zhu, Cornell University
Zhijian Liu, State Key Laboratory of CAD&CG, Zhejiang University
Yuan Liu, Hong Kong University of Science and Technology
Ruiyuan Zhang, Zhejiang University
Jiayuan Lu, State Key Laboratory of CAD&CG, Zhejiang University
Yuchi Huo, State Key Laboratory of CAD&CG, Zhejiang University
Rui Wang, State Key Laboratory of CAD&CG, Zhejiang University