FaceMe: Robust Blind Face Restoration with Personal Identification

📅 2025-01-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address severe identity distortion in blind face restoration, this paper proposes an identity-encoder-driven personalized diffusion framework. The method extracts robust identity features from one or more reference images, either real or synthetically generated, and uses them as conditional guidance so the diffusion model reconstructs high-fidelity, identity-consistent faces. Key contributions include: (i) a lightweight identity encoder; (ii) an adaptive multi-reference feature fusion mechanism; and (iii) a reference-image construction pipeline that models the poses and expressions found in real-world scenarios. Crucially, the framework enables flexible identity switching without fine-tuning. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks, with clear gains in identity fidelity and visual quality, and strong robustness to noise, occlusion, and large pose variations.

📝 Abstract
Blind face restoration is a highly ill-posed problem due to the lack of necessary context. Although existing methods produce high-quality outputs, they often fail to faithfully preserve the individual's identity. In this paper, we propose a personalized face restoration method, FaceMe, based on a diffusion model. Given a single or a few reference images, we use an identity encoder to extract identity-related features, which serve as prompts to guide the diffusion model in restoring high-quality and identity-consistent facial images. By simply combining identity-related features, we effectively minimize the impact of identity-irrelevant features during training and support any number of reference image inputs during inference. Additionally, thanks to the robustness of the identity encoder, synthesized images can be used as reference images during training, and identity changing during inference does not require fine-tuning the model. We also propose a pipeline for constructing a reference image training pool that simulates the poses and expressions that may appear in real-world scenarios. Experimental results demonstrate that our FaceMe can restore high-quality facial images while maintaining identity consistency, achieving excellent performance and robustness.
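The abstract states that identity-related features from any number of reference images are "simply combined" before being used as prompts for the diffusion model. A minimal sketch of one plausible, order-invariant fusion operator (mean-pooling followed by unit normalization); the fusion rule, the 512-dimensional embedding size, and the function names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def fuse_identity_features(embeddings: list[np.ndarray]) -> np.ndarray:
    """Combine identity embeddings from any number of reference images.

    Mean-pooling is an assumed fusion operator chosen for illustration; the
    paper only says identity-related features are "simply combined". Any
    permutation-invariant reduction would share the key property: the same
    code path handles one or many references at inference time.
    """
    if not embeddings:
        raise ValueError("at least one reference embedding is required")
    stacked = np.stack(embeddings)        # shape: (n_refs, dim)
    fused = stacked.mean(axis=0)          # order-invariant over references
    return fused / np.linalg.norm(fused)  # unit-normalize, common for ID embeddings

# One or several references go through the identical path:
refs = [np.random.randn(512) for _ in range(3)]  # hypothetical 512-d identity embeddings
prompt = fuse_identity_features(refs)            # conditioning vector for the diffusion model
```

Because the reduction is symmetric in its inputs, identity-irrelevant variation (pose, lighting) that differs across references tends to average out, which is consistent with the abstract's claim that combining features minimizes the impact of identity-irrelevant information.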
Problem

Research questions and friction points this paper is trying to address.

Face Restoration
Individual Facial Characteristics
Image Quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Face Restoration
Diffusion Model
Personalized Feature Preservation
👥 Authors
Siyu Liu · VCIP, CS, Nankai University
Zheng-Peng Duan · Nankai University (Computer Vision)
OuYang Jia · Samsung Research, China, Beijing (SRC-B)
Jiayi Fu · Nankai University
Hyunhee Park · Department of Camera Innovation Group, Samsung Electronics
Zikun Liu · Samsung Research, China, Beijing (SRC-B)
Chun-Le Guo · VCIP, CS, Nankai University; NKIARI, Shenzhen Futian
Chongyi Li · Professor, Nankai University (Computer Vision, Computational Imaging, Computational Photography, Underwater Imaging)