🤖 AI Summary
Portrait image restoration is challenging when images suffer from both generic degradations and human-motion blur (HMB) during transmission. Method: the paper proposes the first single-step diffusion-based portrait restoration model. It pairs a joint degradation simulation pipeline with a triple-branch, dual-prompt guidance mechanism that integrates a high-quality reference image, residual noise, and an HMB semantic segmentation mask to generate adaptive positive/negative prompt pairs, strengthening the robustness of classifier-free guidance in single-step diffusion. The approach combines diffusion modeling, multimodal prompt engineering, and synthetic-data-driven training. Results: evaluated on the newly constructed MPII-Test benchmark and multiple real and synthetic datasets, the method achieves state-of-the-art PSNR and SSIM scores while producing more natural and visually sharp restorations.
📝 Abstract
Human-centered images often suffer from severe generic degradation during transmission and are also prone to human motion blur (HMB), making restoration challenging. Existing research pays insufficient attention to these issues, even though the two problems often coexist in practice. To address this, we design a degradation pipeline that simulates the coexistence of HMB and generic noise, generating synthetic degraded data to train our proposed HAODiff, a human-aware one-step diffusion model. Specifically, we propose triple-branch dual-prompt guidance (DPG), which leverages high-quality images, residual noise (LQ minus HQ), and HMB segmentation masks as training targets. It produces a positive-negative prompt pair for classifier-free guidance (CFG) in a single diffusion step. The resulting adaptive dual prompts let HAODiff exploit CFG more effectively, boosting robustness against diverse degradations. For fair evaluation, we introduce MPII-Test, a benchmark rich in cases that combine generic noise and HMB. Extensive experiments show that HAODiff surpasses existing state-of-the-art (SOTA) methods in both quantitative metrics and visual quality on synthetic and real-world datasets, including our MPII-Test. Code is available at: https://github.com/gobunu/HAODiff.
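As a rough illustration of the CFG mechanism the abstract describes, the sketch below shows how a positive-negative prompt pair steers a single denoising step. All names, shapes, and the guidance scale here are illustrative placeholders, not HAODiff's actual components; the toy model merely stands in for a conditional noise predictor.

```python
import numpy as np

def cfg_single_step(model, z, pos_emb, neg_emb, guidance_scale=4.5):
    """One denoising step with classifier-free guidance (CFG).

    Instead of pairing the positive prompt with an unconditional (null)
    prediction, an adaptive negative prompt replaces the null branch.
    """
    # Two forward passes: one conditioned on the positive prompt,
    # one conditioned on the negative prompt.
    eps_pos = model(z, pos_emb)
    eps_neg = model(z, neg_emb)
    # CFG extrapolates away from the negative prediction toward the positive one.
    return eps_neg + guidance_scale * (eps_pos - eps_neg)

# Toy stand-ins just to show the call shape.
rng = np.random.default_rng(0)
toy_model = lambda z, c: z * 0.1 + c.mean()  # placeholder noise predictor
z = rng.standard_normal((4, 4))              # latent at the single step
pos_emb = rng.standard_normal(8)             # positive prompt embedding
neg_emb = rng.standard_normal(8)             # negative prompt embedding
out = cfg_single_step(toy_model, z, pos_emb, neg_emb)
```

A single-step restorer runs this combination exactly once, so the quality of the prompt pair matters more than in iterative samplers, which is the motivation the abstract gives for making both prompts adaptive.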