CLIP-FTI: Fine-Grained Face Template Inversion via CLIP-Driven Attribute Conditioning

📅 2025-12-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing face template inversion methods suffer from blurred facial details and poor cross-model transferability, posing significant privacy risks upon template leakage. To address this, we propose a CLIP-guided fine-grained face inversion framework. Our approach is the first to leverage CLIP’s semantic embeddings to explicitly constrain the reconstruction of local facial attributes—such as eyes, nose, and mouth—ensuring structural fidelity. We design a cross-modal interaction network that jointly fuses template features with textual semantic representations, and project the fused representation into the intermediate latent space of StyleGAN for high-fidelity image synthesis. Extensive experiments across multiple benchmarks demonstrate substantial improvements: identity preservation increases by 12.3% in recognition accuracy; part-level similarity improves by 28.7%; and cross-model attack transferability is significantly enhanced. Our method achieves state-of-the-art performance in template-based face inversion.

📝 Abstract
Face recognition systems store face templates for efficient matching. Once leaked, these templates pose a threat: inverting them can yield photorealistic surrogates that compromise privacy and enable impersonation. Although existing research has achieved relatively realistic face template inversion, the reconstructed images exhibit over-smoothed facial-part attributes (eyes, nose, mouth) and limited transferability. To address this problem, we present CLIP-FTI, a CLIP-driven fine-grained attribute conditioning framework for face template inversion. Our core idea is to use CLIP to obtain semantic embeddings of facial parts and use them to guide the reconstruction of specific facial attributes. Specifically, facial-part attribute embeddings extracted from CLIP are fused with the leaked template via a cross-modal feature interaction network and projected into the intermediate latent space of a pretrained StyleGAN. The StyleGAN generator then synthesizes face images that share the template's identity while exhibiting sharper, fine-grained facial-part attributes. Experiments across multiple face recognition backbones and datasets show that our reconstructions (i) achieve higher identification accuracy and attribute similarity, (ii) recover sharper component-level attribute semantics, and (iii) improve cross-model attack transferability compared to prior reconstruction attacks. To the best of our knowledge, ours is the first method to exploit auxiliary semantic information beyond the leaked template itself for face template inversion, and it achieves state-of-the-art results.
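The pipeline described above can be sketched in miniature: a leaked template vector cross-attends to a handful of CLIP-style attribute embeddings, and the fused code is projected into a W+-shaped latent for a StyleGAN generator. This is a minimal numpy sketch, not the paper's implementation; the dimensions (512-d template, three attribute prompts, an 18x512 W+ latent), the single-head attention, and the random projection weights are all illustrative assumptions standing in for trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 512-d leaked face template, K attribute
# embeddings (e.g., prompts for eyes/nose/mouth), an 18x512 W+ latent.
D, K, STYLES = 512, 3, 18

template = rng.standard_normal(D)        # stand-in for the leaked template
attr_emb = rng.standard_normal((K, D))   # stand-in for CLIP text embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Single-head cross-attention: the template queries the attribute embeddings.
Wq = rng.standard_normal((D, D)) / np.sqrt(D)
Wk = rng.standard_normal((D, D)) / np.sqrt(D)
Wv = rng.standard_normal((D, D)) / np.sqrt(D)

q = template @ Wq
scores = softmax((attr_emb @ Wk) @ q / np.sqrt(D))  # attention over K attributes
fused = template + scores @ (attr_emb @ Wv)         # residual cross-modal fusion

# Project the fused code into a W+-shaped latent; in the real system this
# latent would be fed to a pretrained StyleGAN generator.
Wproj = rng.standard_normal((D, STYLES * D)) / np.sqrt(D)
w_plus = (fused @ Wproj).reshape(STYLES, D)

print(w_plus.shape)
```

The residual form (`template + ...`) reflects the stated goal: the template carries identity, while the attribute embeddings only refine part-level detail rather than replace it.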
Problem

Research questions and friction points this paper is trying to address.

Leaked face templates can be inverted into photorealistic facial images, enabling impersonation
Existing inversion methods produce over-smoothed facial-part attributes (eyes, nose, mouth)
Reconstructions transfer poorly across face recognition models, limiting attack realism
Innovation

Methods, ideas, or system contributions that make the work stand out.

CLIP-driven attribute conditioning for fine-grained face reconstruction
Cross-modal feature fusion with StyleGAN for identity-preserving synthesis
Enhanced transferability and accuracy via semantic attribute embeddings
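The cross-model transferability claim above can be made concrete: an attack transfers if a reconstruction built against one recognizer still matches the victim's template under a different recognizer. This is a toy numpy sketch with random linear maps standing in for real backbones (e.g., an ArcFace-style source and an unseen target model); the image dimension, noise level, and the 0.4 decision threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in "recognizers": random linear maps over a flattened image,
# placeholders for the source backbone (used during inversion) and an
# unseen target backbone (used to test transferability).
D_IMG, D_EMB = 1024, 512
enc_target = rng.standard_normal((D_EMB, D_IMG)) / np.sqrt(D_IMG)

original = rng.standard_normal(D_IMG)                           # victim's face
reconstruction = original + 0.1 * rng.standard_normal(D_IMG)    # close surrogate

# The transfer attack succeeds if the reconstruction matches the victim
# under a recognizer the attacker never inverted against.
sim = cosine(enc_target @ reconstruction, enc_target @ original)
success = sim > 0.4  # hypothetical verification threshold
print(success)
```

In the paper's setting, sharper part-level attributes push the reconstruction closer to the true face itself (not just to one model's embedding), which is why the similarity survives the switch to an unseen recognizer.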
Longchen Dai
College of Cyber Security, Jinan University
Zixuan Shen
College of Cyber Security, Jinan University
Zhiheng Zhou
Center for Mind and Brain, University of California, Davis
Peipeng Yu
College of Cyber Security, Jinan University
Zhihua Xia
Jinan University
Digital Forensics