AI Summary
Existing synthetic face recognition methods preserve identity well but suffer from insufficient intra-class diversity due to contextual overfitting, limiting recognition performance. To address this, we propose UIFace, a novel framework that (i) introduces *null-context sampling* for the first time to unlock the inherent generative diversity of diffusion models; (ii) designs a two-stage inference paradigm jointly conditioned on identity and null context; and (iii) incorporates an attention injection module to enable cross-context feature guidance. Crucially, UIFace operates without any real-face data. Empirically, using a synthetic dataset only half the usual size, it significantly outperforms state-of-the-art methods and achieves recognition accuracy on par with models trained on real data on benchmarks such as LFW and CFP-FP. This work establishes a new paradigm for low-data-dependency, high-diversity synthetic face recognition.
Abstract
Face recognition (FR) stands as one of the most crucial applications in computer vision. The accuracy of FR models has significantly improved in recent years due to the availability of large-scale human face datasets. However, directly using these datasets inevitably leads to privacy and legal problems. Generating synthetic data to train FR models is a feasible solution to circumvent these issues. While existing synthetic-based face recognition methods have made significant progress in generating identity-preserving images, they are severely plagued by context overfitting, resulting in a lack of intra-class diversity in the generated images and poor face recognition performance. In this paper, we propose a framework to Unleash the Inherent capability of the model to enhance intra-class diversity for synthetic face recognition, shortened as UIFace. Our framework first trains a diffusion model that can perform sampling conditioned on either identity contexts or a learnable empty context. The former generates identity-preserving images but lacks variation, while the latter exploits the model's intrinsic ability to synthesize intra-class-diversified images, but with random identities. We then adopt a novel two-stage sampling strategy during inference to fully leverage the strengths of both types of contexts, resulting in images that are diverse as well as identity-preserving. Moreover, an attention injection module is introduced to further augment the intra-class variations by utilizing attention maps from the empty context to guide the sampling process in ID-conditioned generation. Experiments show that our method significantly surpasses previous approaches with even less training data and a synthetic dataset half the size. The proposed UIFace even achieves performance comparable to FR models trained on real datasets when we further increase the number of synthetic identities.
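The two-stage sampling strategy described above can be sketched as follows. This is a minimal, hypothetical illustration with a toy one-step "denoiser" standing in for the trained conditional diffusion model; all names (`denoise_step`, `two_stage_sample`, `switch_step`) are illustrative, not the paper's actual API. Early reverse-diffusion steps use the learnable empty (null) context to inject diversity, and later steps switch to the identity context so the sample converges to the target identity:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, context, strength=0.1):
    # Toy stand-in for one reverse-diffusion step of a conditional
    # denoiser: nudges the sample toward the context embedding.
    # A real model would be a conditional U-Net.
    return x + strength * (context - x)

def two_stage_sample(id_context, null_context, total_steps=50, switch_step=20):
    """Two-stage sampling: steps before `switch_step` condition on the
    null context (diversity), the remaining steps condition on the
    identity context (identity preservation)."""
    x = rng.standard_normal(id_context.shape)  # start from Gaussian noise
    for t in range(total_steps):
        ctx = null_context if t < switch_step else id_context
        x = denoise_step(x, ctx)
    return x

dim = 8
id_ctx = np.ones(dim)     # pretend identity embedding
null_ctx = np.zeros(dim)  # learnable empty context (fixed here)
sample = two_stage_sample(id_ctx, null_ctx)
# Because the later steps are ID-conditioned, the final sample ends up
# closer to id_ctx than to null_ctx, while the early null-context steps
# let the random initialization shape intra-class variation.
```

The key design choice this illustrates is the ordering: randomness introduced while conditioning on the null context survives into the final image as intra-class variation, whereas the ID-conditioned tail of the trajectory pulls the result back onto the target identity.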