UIFace: Unleashing Inherent Model Capabilities to Enhance Intra-Class Diversity in Synthetic Face Recognition

πŸ“… 2025-02-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing synthetic face recognition methods preserve identity well but suffer from insufficient intra-class diversity due to contextual overfitting, limiting recognition performance. To address this, we propose UIFace, a novel framework that (i) introduces *null-context sampling* to unlock the inherent generative diversity of diffusion models; (ii) designs a two-stage inference paradigm jointly conditioned on identity and null context; and (iii) incorporates an attention injection module to enable cross-context feature guidance. Crucially, UIFace operates without any real-face data. Empirically, using a synthetic dataset only half the size of prior ones, it significantly outperforms state-of-the-art methods and achieves recognition accuracy on par with models trained on real data on benchmarks such as LFW and CFP-FP. This work establishes a new paradigm for low-data-dependency, high-diversity synthetic face recognition.

πŸ“ Abstract
Face recognition (FR) stands as one of the most crucial applications in computer vision. The accuracy of FR models has significantly improved in recent years due to the availability of large-scale human face datasets. However, directly using these datasets can inevitably lead to privacy and legal problems. Generating synthetic data to train FR models is a feasible solution to circumvent these issues. While existing synthetic-based face recognition methods have made significant progress in generating identity-preserving images, they are severely plagued by context overfitting, resulting in a lack of intra-class diversity of generated images and poor face recognition performance. In this paper, we propose a framework to Unleash Inherent capability of the model to enhance intra-class diversity for synthetic face recognition, shortened as UIFace. Our framework first trains a diffusion model that can perform sampling conditioned on either identity contexts or a learnable empty context. The former generates identity-preserving images but lacks variations, while the latter exploits the model's intrinsic ability to synthesize intra-class-diversified images but with random identities. Then we adopt a novel two-stage sampling strategy during inference to fully leverage the strengths of both types of contexts, resulting in images that are diverse as well as identity-preserving. Moreover, an attention injection module is introduced to further augment the intra-class variations by utilizing attention maps from the empty context to guide the sampling process in ID-conditioned generation. Experiments show that our method significantly surpasses previous approaches with even less training data and a synthetic dataset half the size. The proposed UIFace even achieves comparable performance with FR models trained on real datasets when we further increase the number of synthetic identities.
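The two-stage sampling described in the abstract can be sketched as below. This is a toy illustration, not the paper's implementation: `toy_denoise`, the split point `switch`, and all tensor shapes are assumptions made for a runnable example. The idea is only the scheduling: early reverse-diffusion steps use the learnable empty (null) context to recover the model's inherent diversity, and later steps switch to the identity context so the final sample is identity-preserving.

```python
import numpy as np

def two_stage_sample(denoise, x_T, id_ctx, null_ctx, T=50, switch=0.6):
    """Toy sketch of two-stage sampling (names and split rule are illustrative).

    Steps with t above the switch point are conditioned on the null context
    (diversity); the remaining steps are conditioned on the identity context
    (identity preservation).
    """
    x = x_T
    t_switch = int(T * switch)  # assumed fixed split; a real system would tune this
    for t in range(T, 0, -1):
        ctx = null_ctx if t > t_switch else id_ctx
        x = denoise(x, t, ctx)  # one reverse-diffusion step (placeholder)
    return x

# Stand-in "denoiser": simply pulls the state toward the context vector,
# so the scheduling effect is visible without a real diffusion model.
def toy_denoise(x, t, ctx):
    return x + 0.1 * (ctx - x)

rng = np.random.default_rng(0)
x_T = rng.normal(size=4)        # initial noise
id_ctx = np.ones(4)             # identity embedding (illustrative)
null_ctx = np.zeros(4)          # learnable empty context (illustrative)
out = two_stage_sample(toy_denoise, x_T, id_ctx, null_ctx)
```

Because the identity context governs the final steps, the toy sample ends close to the identity embedding even though the early steps were driven by the null context.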
Problem

Research questions and friction points this paper is trying to address.

Enhance intra-class diversity in synthetic face recognition.
Address context overfitting in synthetic face generation.
Improve face recognition performance with synthetic data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage sampling strategy
Attention injection module
Diffusion model training
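The attention injection idea above can be illustrated with a minimal single-head attention sketch. The blending rule, the weight `alpha`, and all shapes are assumptions for illustration, not the paper's exact formulation: an attention map recorded during a null-context pass is mixed into the ID-conditioned pass to carry over its more varied spatial structure.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v, injected=None, alpha=0.5):
    """Toy single-head attention. If `injected` (an attention map from a
    null-context pass) is given, it is convexly blended with the current
    map; `alpha` and the blending rule are illustrative assumptions."""
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    if injected is not None:
        attn = (1 - alpha) * attn + alpha * injected
    return attn @ v, attn

rng = np.random.default_rng(0)
q_id, k, v = (rng.normal(size=(5, 8)) for _ in range(3))
q_null = rng.normal(size=(5, 8))

# 1) Null-context pass: record its attention pattern.
_, null_attn = attention(q_null, k, v)
# 2) ID-conditioned pass: inject that pattern to add intra-class variation.
out, mixed_attn = attention(q_id, k, v, injected=null_attn)
```

A convex combination keeps the blended map row-stochastic, so it remains a valid attention distribution.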
Xiao Lin
Tencent Youtu Lab
Yuge Huang
Tencent Youtu Lab
Jianqing Xu
Tencent Youtu Lab
Yuxi Mi
Fudan University
Face Recognition · Privacy · Biometrics · Computer Vision
Shuigeng Zhou
Fudan University
Database · Bioinformatics · Machine Learning
Shouhong Ding
Tencent Youtu Lab