AI Summary
To address the identity verification distortion caused by facial beautification on social media, this paper proposes Face2Face, the first label-driven, three-stage face restoration framework that jointly corrects geometric deformations and color distortions to achieve interpretable, high-fidelity reconstruction of original faces from beautified ones. The method comprises: (1) a fine-grained beautification-type detector that outputs a label encoding the retouching methods and their intensities; (2) FaceR, a label-conditioned diffusion model that guides structural and textural recovery; and (3) a hierarchical adaptive instance normalization (H-AdaIN) module for precise color correction. Evaluated on a multi-source beautification dataset, Face2Face achieves improvements of 3.2 dB in PSNR and 0.08 in SSIM over state-of-the-art methods, enabling fine-grained identity recognition and the joint reconstruction of structure, texture, and color.
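The three components above compose into a single restoration pipeline. The sketch below illustrates that composition only; every function name and the placeholder logic are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

# Illustrative stand-ins for the three stages (hypothetical, not the authors' API).
def detect_retouching(img):
    # Stage 1: the detector would predict a label of three integers encoding
    # the retouching methods and their degrees; a fixed dummy label here.
    return (1, 0, 2)

def facer_restore(img, label):
    # Stage 2: placeholder for FaceR, the label-conditioned diffusion model
    # that restores structure and texture; identity mapping here.
    return img.copy()

def h_adain_correct(img, reference):
    # Stage 3: placeholder for H-AdaIN color correction against the input's
    # color statistics; identity mapping here.
    return img

def face2face(img):
    label = detect_retouching(img)          # fine-grained retouching label
    restored = facer_restore(img, label)    # label-guided restoration
    return h_adain_correct(restored, img)   # color-shift correction
```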
Abstract
With the popularity of social media platforms such as Instagram and TikTok, and the widespread availability of convenient retouching tools, an increasing number of people beautify their facial photographs before sharing them. This poses challenges for applications that demand photographic authenticity, such as identity verification. By altering facial images, users can easily create deceptive photographs and spread false information, undermining the reliability of identity verification systems and social media, and even enabling online fraud. Prior work has proposed makeup removal methods, but these cannot restore images affected by the geometric deformations that retouching introduces. To tackle the problem of facial retouching restoration, we propose a framework, dubbed Face2Face, which consists of three components: a facial retouching detector, an image restoration model named FaceR, and a color correction module called Hierarchical Adaptive Instance Normalization (H-AdaIN). First, the facial retouching detector predicts a retouching label containing three integers that indicate the retouching methods and their corresponding degrees. FaceR then restores the retouched image conditioned on the predicted label. Finally, H-AdaIN corrects the color shift introduced by diffusion models. Extensive experiments demonstrate the effectiveness of our framework and of each module.
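The color-correction step builds on adaptive instance normalization, which re-aligns the per-channel mean and standard deviation of one image to those of a reference. Below is a minimal single-level AdaIN sketch in NumPy; the paper's H-AdaIN presumably applies such alignment hierarchically, which is not reproduced here.

```python
import numpy as np

def adain(content, reference, eps=1e-5):
    """Per-channel adaptive instance normalization.

    Shifts and scales `content` (H x W x C) so that each channel's mean and
    standard deviation match those of `reference`. A minimal sketch only;
    the hierarchical scheme of H-AdaIN is an extension beyond this function.
    """
    c_mu = content.mean(axis=(0, 1), keepdims=True)
    c_std = content.std(axis=(0, 1), keepdims=True)
    r_mu = reference.mean(axis=(0, 1), keepdims=True)
    r_std = reference.std(axis=(0, 1), keepdims=True)
    # Normalize content statistics, then impose the reference statistics.
    return r_std * (content - c_mu) / (c_std + eps) + r_mu
```

By construction, the output's per-channel mean equals the reference's mean exactly, and its standard deviation matches up to the `eps` stabilizer.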