π€ AI Summary
Current text-to-3D human generation methods suffer from significant bottlenecks in hand/face detail fidelity, visual realism, appearance controllability, and textβ3D alignment, further hindered by the scarcity of high-quality annotated 3D data. To address these challenges, we propose the first weakly supervised closed-loop paradigm: (1) synthesizing diverse, attribute-annotated human images via a text-conditioned diffusion model; (2) mapping image features to 3D Gaussian point clouds using a Transformer; and (3) training a conditional point cloud diffusion model for fine-grained reconstruction. Our approach innovatively integrates Gaussian Splatting rendering with diffusion modeling, achieving both high rendering efficiency and substantially improved geometric accuracy and semantic consistency. Compared to state-of-the-art methods, our framework achieves superior textβ3D alignment, visual realism, and rendering quality, while accelerating inference by an order of magnitude. We publicly release our code and dataset.
π Abstract
3D human generation is an important problem with a wide range of applications in computer vision and graphics. Despite recent progress in generative AI such as diffusion models or rendering methods like Neural Radiance Fields or Gaussian Splatting, controlling the generation of accurate 3D humans from text prompts remains an open challenge. Current methods struggle with fine detail, accurate rendering of hands and faces, human realism, and controlability over appearance. The lack of diversity, realism, and annotation in human image data also remains a challenge, hindering the development of a foundational 3D human model. We present a weakly supervised pipeline that tries to address these challenges. In the first step, we generate a photorealistic human image dataset with controllable attributes such as appearance, race, gender, etc using a state-of-the-art image diffusion model. Next, we propose an efficient mapping approach from image features to 3D point clouds using a transformer-based architecture. Finally, we close the loop by training a point-cloud diffusion model that is conditioned on the same text prompts used to generate the original samples. We demonstrate orders-of-magnitude speed-ups in 3D human generation compared to the state-of-the-art approaches, along with significantly improved text-prompt alignment, realism, and rendering quality. We will make the code and dataset available.