🤖 AI Summary
This work addresses the challenge of unnatural 3D human reconstructions from single-view images under complex or dynamic poses, a limitation largely attributed to insufficient pose diversity in existing 3D datasets. To overcome this, the authors propose DrPose, a method that directly fine-tunes a multi-view diffusion model using only single-view images and their corresponding 2D pose annotations through reward-based learning. The key innovations include a differentiable pose consistency reward function, PoseScore, and DrPose15K, a large-scale synthetic pose dataset that requires no ground-truth 3D labels. Experimental results demonstrate that DrPose significantly improves both the naturalness and accuracy of 3D reconstructions across standard benchmarks, in-the-wild images, and a newly introduced challenging pose test set.
📝 Abstract
Single-view 3D human reconstruction has achieved remarkable progress through the adoption of multi-view diffusion models, yet the recovered 3D humans often exhibit unnatural poses. This phenomenon becomes pronounced when reconstructing 3D humans with dynamic or challenging poses, which we attribute to the limited scale of available 3D human datasets with diverse poses. To address this limitation, we introduce DrPose, a Direct Reward fine-tuning algorithm on Poses, which enables post-training of a multi-view diffusion model on diverse poses without requiring expensive 3D human assets. DrPose trains a model using only human poses paired with single-view images, employing direct reward fine-tuning to maximize PoseScore, our proposed differentiable reward that quantifies consistency between a generated multi-view latent image and a ground-truth human pose. This optimization is conducted on DrPose15K, a novel dataset constructed from an existing human motion dataset and a pose-conditioned video generative model. Built from abundant human pose sequence data, DrPose15K exhibits a broader pose distribution than existing 3D human datasets. We validate our approach through evaluation on conventional benchmark datasets, in-the-wild images, and a newly constructed benchmark, with a particular focus on assessing performance on challenging human poses. Our results demonstrate consistent qualitative and quantitative improvements across all benchmarks. Project page: https://seunguk-do.github.io/drpose.
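To make the idea of direct reward fine-tuning concrete, here is a minimal, purely illustrative sketch. The actual PoseScore, diffusion model, and training loop are not specified here; in this toy version a linear "head" stands in for the generative model, the reward is a hypothetical negative keypoint-error stand-in, and the model is updated by gradient ascent on that reward, mirroring the principle of differentiating through a pose-consistency reward rather than through 3D supervision.

```python
# Toy sketch of direct reward fine-tuning (NOT the paper's implementation).
# A linear map stands in for the multi-view diffusion model, and pose_score
# is a hypothetical differentiable stand-in for PoseScore.
import numpy as np

def pose_score(pred, gt):
    """Toy reward: negative mean squared 2D-keypoint error (higher = better)."""
    return -np.mean((pred - gt) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))            # fixed "latents" for 8 samples
W = rng.normal(size=(16, 34)) * 0.1     # linear head: latent -> 17 joints x 2 coords
gt = rng.normal(size=(8, 34))           # flattened ground-truth 2D keypoints

before = pose_score(X @ W, gt)
lr = 0.01
for _ in range(200):
    pred = X @ W
    # Analytic gradient of the reward w.r.t. W for this linear toy model.
    grad = -2.0 * X.T @ (pred - gt) / pred.size
    W += lr * grad                      # gradient ASCENT: maximize the reward

after = pose_score(X @ W, gt)
```

Running the loop increases the reward (`after > before`), illustrating the core mechanism: the model's parameters are pushed directly toward higher pose consistency, with no 3D labels involved.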