🤖 AI Summary
This study systematically evaluates whether AI-generated facial de-occlusion images are suitable for biometric identity matching, addressing the risk of misidentification arising from public misuse. For the first time, a large-scale empirical analysis is conducted that integrates mainstream commercial AI de-occlusion models, state-of-the-art face matching algorithms, and extensive real-world facial datasets, enabling both quantitative and qualitative assessment. The findings demonstrate that AI-generated de-occluded faces fail to reliably reconstruct true identities and exhibit unacceptably high false identification rates, making them unsuitable for any formal biometric application. This work fills a critical gap in empirical research on the topic and provides essential risk warnings for both the general public and law enforcement agencies.
📝 Abstract
Recently, crowd-sourced online criminal investigations have used generative AI to enhance low-quality visual evidence. In one high-profile case, social-media users circulated an "AI-unmasked" image of a federal agent involved in a fatal shooting, fueling a widespread misidentification. In response to this and similar incidents, we conducted a large-scale analysis evaluating the efficacy and risks of commercial AI-powered facial unmasking, specifically assessing whether the resulting faces can be reliably matched to their true identities.