🤖 AI Summary
This work addresses a critical gap in existing privacy-preserving face recognition (PPFR) systems, which rely on pixel-level metrics such as PSNR and SSIM to assess privacy protection while overlooking the risk that identity information may still be directly extractable. To this end, we propose FaceLinkGen, an attack framework capable of performing direct identity matching and face regeneration from protected templates without reconstructing the original pixel data. For the first time, privacy leakage is evaluated through identity extraction rather than image reconstruction. FaceLinkGen achieves over 98.5% matching accuracy and over 96% face regeneration success across three mainstream PPFR systems, and remains highly effective even with near-zero prior knowledge. These results expose a fundamental flaw: visual obfuscation alone cannot guarantee genuine identity privacy.
📄 Abstract
Transformation-based privacy-preserving face recognition (PPFR) aims to verify identities while hiding facial data from attackers and malicious service providers. Existing evaluations mostly treat privacy as resistance to pixel-level reconstruction, measured by PSNR and SSIM. We show that this reconstruction-centric view fails. We present FaceLinkGen, an identity extraction attack that performs linkage/matching and face regeneration directly from protected templates without recovering the original pixels. On three recent PPFR systems, FaceLinkGen reaches over 98.5% matching accuracy and above 96% regeneration success, and still exceeds 92% matching and 94% regeneration in a near-zero-knowledge setting. These results expose a structural gap between pixel distortion metrics, which are widely used in PPFR evaluation, and real privacy: visual obfuscation leaves identity information broadly exposed to both external intruders and untrusted service providers.
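To make the core idea concrete, the sketch below illustrates template-level identity linkage in its simplest form. It is not the paper's implementation: the `protect` transform, embedding dimensions, and noise level are all illustrative stand-ins. The point is that if a protection scheme preserves the geometry of identity embeddings, an attacker can link two protected templates of the same person by similarity alone, without ever reconstructing pixels.

```python
# Illustrative sketch only (assumed toy setup, not the FaceLinkGen method):
# identity linkage directly on "protected" templates via cosine similarity.
import math
import random

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def protect(embedding, noise=0.05, seed=None):
    # Stand-in for a transformation-based PPFR protection step: a small
    # random perturbation. Real systems apply learned or keyed transforms,
    # but any transform that preserves identity geometry is linkable.
    rng = random.Random(seed)
    return [x + rng.gauss(0, noise) for x in embedding]

# Two hypothetical 128-d face embeddings: same identity vs. a different one.
rng = random.Random(0)
id_a = [rng.gauss(0, 1) for _ in range(128)]
id_b = [rng.gauss(0, 1) for _ in range(128)]

t_enroll = protect(id_a, seed=1)  # protected enrollment template
t_probe = protect(id_a, seed=2)   # protected probe, same person
t_other = protect(id_b, seed=3)   # protected template, different person

same = cosine(t_enroll, t_probe)
diff = cosine(t_enroll, t_other)
print(f"same-identity similarity: {same:.3f}, cross-identity: {diff:.3f}")
# Identity remains linkable even though no pixels were recovered.
```

In this toy setting, the same-identity pair scores far higher than the cross-identity pair, which is exactly the leakage that pixel metrics like PSNR and SSIM cannot detect.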