🤖 AI Summary
This work addresses the challenge of full-body 3D human reconstruction from in-the-wild “Outfit of the Day” (OOTD) photos, where severe pose variation, occlusions, and cluttered backgrounds impede accurate modeling. To this end, we propose an end-to-end framework for holistic appearance modeling. Methodologically: (i) we bypass image decomposition and directly optimize a Neural Radiance Field (NeRF) representation; (ii) we introduce a Condition Prior Preservation Loss (CPPL) to mitigate language drift during few-shot fine-tuning; and (iii) we integrate SMPL-X canonical-space sampling with Multi-Resolution 3D Score Distillation Sampling (3D-SDS) to enhance geometric and textural fidelity. Our method reconstructs high-fidelity avatars from a handful of OOTD photos in under five minutes (a 48× speed-up over state-of-the-art approaches) while significantly outperforming prior work in detail recovery, occlusion robustness, and cross-pose consistency. The resulting avatars support photorealistic virtual try-on and animation-driven rendering.
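The summary does not spell out the CPPL objective. Assuming it follows the DreamBooth-style prior-preservation recipe extended with a pose condition (the symbols $c_{\text{pose}}$, $c_{\text{pr}}$, and the weight $\lambda$ below are our notation, not the paper's), a plausible sketch is:

```latex
\mathcal{L}_{\text{CPPL}}
  = \underbrace{\mathbb{E}_{x,\epsilon,t}\!\left[
      \bigl\| \epsilon_\phi(x_t,\, c,\, c_{\text{pose}},\, t) - \epsilon \bigr\|_2^2
    \right]}_{\text{few-shot OOTD term}}
  \;+\; \lambda\,
    \underbrace{\mathbb{E}_{x',\epsilon',t}\!\left[
      \bigl\| \epsilon_\phi(x'_t,\, c_{\text{pr}},\, c'_{\text{pose}},\, t) - \epsilon' \bigr\|_2^2
    \right]}_{\text{class-prior term}}
```

Here $\epsilon_\phi$ is the fine-tuned denoiser, $c$ the subject prompt, $c_{\text{pr}}$ a generic class prompt, and $x'$ samples drawn from the frozen prior model; the second term anchors the model to the class distribution and thereby counteracts language drift during few-shot training.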
📝 Abstract
We propose PFAvatar (Pose-Fusion Avatar), a new method that reconstructs high-quality 3D avatars from “Outfit of the Day” (OOTD) photos, which exhibit diverse poses, occlusions, and complex backgrounds. Our method consists of two stages: (1) fine-tuning a pose-aware diffusion model from few-shot OOTD examples and (2) distilling a 3D avatar represented by a neural radiance field (NeRF). In the first stage, unlike previous methods that segment images into assets (e.g., garments, accessories) for 3D assembly, which is prone to inconsistency, we avoid decomposition and directly model the full-body appearance. By integrating a pre-trained ControlNet for pose conditioning and a novel Condition Prior Preservation Loss (CPPL), our method enables end-to-end learning of fine details while mitigating language drift in few-shot training. Our method completes personalization in just 5 minutes, achieving a 48× speed-up compared to previous approaches. In the second stage, we introduce a NeRF-based avatar representation optimized by canonical SMPL-X space sampling and Multi-Resolution 3D Score Distillation Sampling (3D-SDS). Compared to mesh-based representations that suffer from resolution-dependent discretization and erroneous occluded geometry, our continuous radiance field can preserve high-frequency textures (e.g., hair) and handle occlusions correctly through transmittance. Experiments demonstrate that PFAvatar outperforms state-of-the-art methods in terms of reconstruction fidelity, detail preservation, and robustness to occlusions/truncations, advancing practical 3D avatar generation from real-world OOTD albums. In addition, the reconstructed 3D avatar supports downstream applications such as virtual try-on, animation, and human video reenactment, further demonstrating the versatility and practical value of our approach.
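Two ingredients named in the second stage have standard forms in the literature; as a sketch (not necessarily the paper's exact multi-resolution formulation), the Score Distillation Sampling gradient that drives the NeRF parameters $\theta$, and the transmittance term from volume rendering that lets the radiance field account for occlusion along a ray $\mathbf{r}$:

```latex
\nabla_\theta \mathcal{L}_{\text{SDS}}
  = \mathbb{E}_{t,\epsilon}\!\left[
      w(t)\,\bigl(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\bigr)\,
      \frac{\partial x}{\partial \theta}
    \right],
\qquad
T(s) = \exp\!\left(-\int_{s_n}^{s} \sigma\bigl(\mathbf{r}(u)\bigr)\,du\right)
```

In the SDS gradient, $x$ is a rendering of the avatar, $\hat{\epsilon}_\phi$ the fine-tuned diffusion denoiser conditioned on prompt $y$, and $w(t)$ a timestep weighting; in the transmittance, $\sigma$ is the density field, so points behind opaque geometry receive exponentially vanishing weight, which is why occluded regions are handled without the erroneous geometry that mesh-based methods can produce.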