🤖 AI Summary
Existing face template inversion methods suffer from blurred facial details and poor cross-model transferability, posing significant privacy risks when templates leak. To address this, we propose a CLIP-guided fine-grained face inversion framework. Our approach is the first to leverage CLIP's semantic embeddings to explicitly constrain the reconstruction of local facial attributes (eyes, nose, mouth), ensuring structural fidelity. We design a cross-modal interaction network that fuses template features with textual semantic representations and projects the fused representation into the intermediate latent space of StyleGAN for high-fidelity image synthesis. Extensive experiments across multiple benchmarks show substantial improvements: recognition accuracy for identity preservation rises by 12.3%, part-level attribute similarity improves by 28.7%, and cross-model attack transferability is significantly enhanced. Our method achieves state-of-the-art performance in template-based face inversion.
📝 Abstract
Face recognition systems store face templates for efficient matching. Once leaked, these templates pose a serious threat: inverting them can yield photorealistic surrogates that compromise privacy and enable impersonation. Although existing methods achieve fairly realistic face template inversion, the reconstructed images exhibit over-smoothed facial-part attributes (eyes, nose, mouth) and limited transferability. To address this problem, we present CLIP-FTI, a CLIP-driven fine-grained attribute conditioning framework for face template inversion. Our core idea is to use CLIP to obtain semantic embeddings of facial parts, so that specific part-level attributes can be faithfully reconstructed. Specifically, facial attribute embeddings extracted from CLIP are fused with the leaked template via a cross-modal feature interaction network and projected into the intermediate latent space of a pretrained StyleGAN. The StyleGAN generator then synthesizes face images that match the identity of the template while recovering more fine-grained facial attributes. Experiments across multiple face recognition backbones and datasets show that our reconstructions (i) achieve higher identification accuracy and attribute similarity, (ii) recover sharper component-level attribute semantics, and (iii) improve cross-model attack transferability compared with prior reconstruction attacks. To the best of our knowledge, ours is the first method to exploit auxiliary semantic information beyond the leaked template itself for face template inversion, and it achieves state-of-the-art results.
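The abstract does not spell out the fusion-and-projection step, so the following is a minimal NumPy sketch under assumed dimensions: a 512-d face template (e.g. an ArcFace embedding), a 512-d CLIP text embedding for a part-level attribute description, and StyleGAN2's 18×512 W+ latent space. A single linear layer with a tanh nonlinearity stands in for the trained cross-modal feature interaction network; all weights here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 512-d leaked template, 512-d CLIP attribute embedding.
TEMPLATE_DIM, CLIP_DIM = 512, 512
# StyleGAN2's intermediate W+ space: 18 style layers of 512 dims each.
NUM_LAYERS, W_DIM = 18, 512

def fuse_and_project(template, clip_embed, w_fuse, w_proj):
    """Toy cross-modal fusion: concatenate the two modalities, apply a
    linear fusion layer with tanh, then map the fused vector to an
    18x512 W+ latent that a StyleGAN generator would consume."""
    x = np.concatenate([template, clip_embed])            # (1024,)
    fused = np.tanh(w_fuse @ x)                           # (512,)
    w_plus = (w_proj @ fused).reshape(NUM_LAYERS, W_DIM)  # (18, 512)
    return w_plus

# Randomly initialized weights stand in for the trained interaction network.
w_fuse = rng.standard_normal((512, TEMPLATE_DIM + CLIP_DIM)) * 0.01
w_proj = rng.standard_normal((NUM_LAYERS * W_DIM, 512)) * 0.01

template = rng.standard_normal(TEMPLATE_DIM)  # stand-in for the leaked template
clip_embed = rng.standard_normal(CLIP_DIM)    # stand-in for a CLIP text embedding

w_plus = fuse_and_project(template, clip_embed, w_fuse, w_proj)
print(w_plus.shape)  # (18, 512)
```

In the actual method the resulting W+ latent would be fed to a pretrained StyleGAN generator to synthesize the reconstructed face; that step is omitted here since it requires the generator weights.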