Vec2Face: Scaling Face Dataset Generation with Loosely Constrained Vectors

📅 2024-09-04
🏛️ arXiv.org
📈 Citations: 1
Influential: 1
📄 PDF
🤖 AI Summary
To address the lack of inter-class identity separation and intra-class attribute diversity in synthetic face data for face recognition training, this paper proposes an end-to-end controllable face generation framework. It is the first to enable simultaneous identity synthesis and fine-grained attribute editing solely from random latent vectors. The method employs a feature-masked autoencoder-decoder architecture, enforces vector similarity constraints to ensure separation across up to 300K identities, models intra-class variation via small latent perturbations, and achieves targeted attribute editing through gradient-based optimization. The authors introduce HSFace, a scalable synthetic dataset spanning 10K to 300K identities that significantly improves model generalization. Models trained on HSFace achieve 92.0%–93.52% accuracy across five benchmarks and, for the first time, surpass the performance of models trained on real-data counterparts of comparable scale on CALFW, IJBB, and IJBC.

📝 Abstract
This paper studies how to synthesize face images of non-existent persons, to create a dataset that allows effective training of face recognition (FR) models. Besides generating realistic face images, two other important goals are: 1) the ability to generate a large number of distinct identities (inter-class separation), and 2) a proper variation in appearance of the images for each identity (intra-class variation). However, existing works 1) are typically limited in how many well-separated identities can be generated and 2) either neglect or use an external model for attribute augmentation. We propose Vec2Face, a holistic model that uses only a sampled vector as input and can flexibly generate and control the identity of face images and their attributes. Composed of a feature masked autoencoder and an image decoder, Vec2Face is supervised by face image reconstruction and can be conveniently used in inference. Using vectors with low similarity among themselves as inputs, Vec2Face generates well-separated identities. Randomly perturbing an input identity vector within a small range allows Vec2Face to generate faces of the same identity with proper variation in face attributes. It is also possible to generate images with designated attributes by adjusting vector values with a gradient descent method. Vec2Face has efficiently synthesized as many as 300K identities, whereas 60K is the largest number of identities created in the previous works. As for performance, FR models trained with the generated HSFace datasets, from 10k to 300k identities, achieve state-of-the-art accuracy, from 92% to 93.52%, on five real-world test sets (i.e., LFW, CFP-FP, AgeDB-30, CALFW, and CPLFW). For the first time, the FR model trained using our synthetic training set achieves higher accuracy than that trained using a same-scale training set of real face images on the CALFW, IJBB, and IJBC test sets.
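The abstract's two sampling ideas can be illustrated with a minimal NumPy sketch: identity vectors are drawn so their pairwise cosine similarity stays low (inter-class separation), and each identity vector is perturbed within a small radius to produce same-identity variants (intra-class variation). The dimension, similarity threshold, and noise scale below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sample_identity_vectors(n_ids, dim=512, max_cos_sim=0.3, seed=0):
    """Rejection-sample unit vectors whose pairwise cosine similarity
    stays below max_cos_sim, mimicking well-separated identity inputs."""
    rng = np.random.default_rng(seed)
    ids = []
    while len(ids) < n_ids:
        v = rng.standard_normal(dim)
        v /= np.linalg.norm(v)
        # keep the candidate only if it is dissimilar to all accepted IDs
        if all(abs(v @ u) < max_cos_sim for u in ids):
            ids.append(v)
    return np.stack(ids)

def perturb_identity(id_vec, n_samples=4, scale=0.01, seed=0):
    """Add small Gaussian noise to one identity vector and re-normalize;
    such nearby vectors would drive intra-class attribute variation."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n_samples, id_vec.shape[0])) * scale
    vecs = id_vec + noise
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

identities = sample_identity_vectors(8)      # 8 well-separated identity vectors
variants = perturb_identity(identities[0])   # 4 variants of the first identity
```

In high dimensions random unit vectors are nearly orthogonal, which is why rejection sampling terminates quickly; the paper's actual constraint mechanism may differ.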
Problem

Research questions and friction points this paper is trying to address.

Generating diverse synthetic face images
Enhancing face recognition model accuracy
Scaling to large numbers of well-separated identities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vec2Face generates diverse face identities
Uses vectors for controlled attribute variation
Achieves state-of-the-art face recognition accuracy
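The gradient-based attribute control mentioned above can be sketched as follows: a latent vector is nudged by gradient descent until a (hypothetical, here linear) attribute predictor outputs the desired score. The predictor, loss, and step sizes are illustrative assumptions; the paper optimizes against its own attribute models.

```python
import numpy as np

def edit_attribute(id_vec, attr_weights, target=1.0, lr=0.1, steps=200):
    """Adjust a latent vector by gradient descent so a toy linear
    attribute predictor sigmoid(w . v) moves toward `target`."""
    v = id_vec.copy()
    for _ in range(steps):
        score = 1.0 / (1.0 + np.exp(-attr_weights @ v))
        # gradient of (score - target)^2 w.r.t. v via the sigmoid
        grad = 2.0 * (score - target) * score * (1.0 - score) * attr_weights
        v -= lr * grad
    return v / np.linalg.norm(v)  # keep the edited vector on the unit sphere
```

Because the edit moves the vector only along the attribute direction, the rest of the latent code, and hence the identity, is largely preserved, which is the intuition behind editing attributes without changing who the face belongs to.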