🤖 AI Summary
Facial beauty prediction (FBP) is highly subjective and relies on both holistic and fine-grained aesthetic cues, which makes it difficult for conventional CNN- or ViT-based models to learn human-aligned representations. To address this, we propose a two-stage generative pretraining framework: first, self-supervised denoising pretraining of a Diffusion Transformer on large-scale unlabeled face images to acquire domain-specific features attuned to subjective aesthetics; second, freezing the encoder, attaching a lightweight regression head, and fine-tuning on the FBP5500 dataset for beauty-score prediction. Our method significantly outperforms general-purpose pretrained baselines, achieving a Pearson correlation coefficient of 0.932 on FBP5500 and setting a new state of the art. This work constitutes the first empirical validation of generative vision pretraining for subjective visual assessment tasks, demonstrating its effectiveness and superiority over discriminative paradigms.
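The stage-1 objective described above can be sketched as a standard DDPM-style denoising task: noise a clean image at a random diffusion step and train the network to predict the added noise. This is a minimal NumPy illustration only; the single linear layer is a hypothetical stand-in for the Diffusion Transformer, and the noise schedule values are illustrative, not those used in the paper.

```python
import numpy as np

# Stage-1 sketch: self-supervised denoising pretraining (DDPM-style).
# The linear "denoiser" below is a hypothetical stand-in for the
# Diffusion Transformer encoder; all sizes are toy values.

rng = np.random.default_rng(0)

T = 100                                # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 2e-2, T)     # linear noise schedule
alpha_bars = np.cumprod(1.0 - betas)   # cumulative signal retention per step

def noise_sample(x0, t, eps):
    """q(x_t | x_0): mix the clean image with Gaussian noise at step t."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

# Toy "face images": flattened vectors standing in for unlabeled FFHQ crops.
x0 = rng.standard_normal((8, 64))

# Hypothetical denoiser: one linear layer that predicts the added noise.
W = rng.standard_normal((64, 64)) * 0.01

def predict_eps(xt):
    return xt @ W

# One training step: sample t and eps, minimise ||eps - eps_hat||^2 by SGD.
t = int(rng.integers(T))
eps = rng.standard_normal(x0.shape)
xt = noise_sample(x0, t, eps)
loss_before = np.mean((predict_eps(xt) - eps) ** 2)
grad = 2.0 * xt.T @ (predict_eps(xt) - eps) / xt.size
W -= 0.1 * grad
loss_after = np.mean((predict_eps(xt) - eps) ** 2)
```

The point of the objective is that predicting the removed noise forces the network to model the distribution of faces themselves, which is what makes the learned encoder useful downstream.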
📝 Abstract
Facial Beauty Prediction (FBP) is a challenging computer vision task due to its subjective nature and the subtle, holistic features that influence human perception. Prevailing methods, often based on deep convolutional networks or standard Vision Transformers pre-trained on generic object classification (e.g., ImageNet), struggle to learn feature representations that are truly aligned with high-level aesthetic assessment. In this paper, we propose a novel two-stage framework that leverages the power of generative models to create a superior, domain-specific feature extractor. In the first stage, we pre-train a Diffusion Transformer on a large-scale, unlabeled facial dataset (FFHQ) through a self-supervised denoising task. This process forces the model to learn the fundamental data distribution of human faces, capturing nuanced details and structural priors essential for aesthetic evaluation. In the second stage, the pre-trained and frozen encoder of our Diffusion Transformer is used as a backbone feature extractor, with only a lightweight regression head being fine-tuned on the target FBP dataset (FBP5500). Our method, termed Diff-FBP, sets a new state-of-the-art on the FBP5500 benchmark, achieving a Pearson Correlation Coefficient (PCC) of 0.932, significantly outperforming prior art based on general-purpose pre-training. Extensive ablation studies validate that our generative pre-training strategy is the key contributor to this performance leap, creating feature representations that are more semantically potent for subjective visual tasks.
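The stage-2 recipe above (frozen backbone, lightweight trainable head) can be sketched as follows. This is an illustrative NumPy version under loudly stated assumptions: the fixed random projection stands in for the frozen Diffusion Transformer encoder, the toy data stands in for FBP5500 images and mean beauty scores, and the head is solved in closed form as ridge regression rather than fine-tuned by gradient descent as in the paper.

```python
import numpy as np

# Stage-2 sketch: frozen encoder + lightweight regression head.
# W_frozen is a hypothetical stand-in for the pre-trained encoder;
# it is fixed and never updated during head fitting.

rng = np.random.default_rng(0)

D_IMG, D_FEAT, N = 128, 32, 200

# "Frozen" backbone: parameters fixed for all of stage 2.
W_frozen = rng.standard_normal((D_IMG, D_FEAT)) / np.sqrt(D_IMG)

def encode(images):
    """Extract features with the frozen encoder (no updates)."""
    return np.tanh(images @ W_frozen)

# Toy stand-ins for FBP5500: images and mean beauty scores in [1, 5].
images = rng.standard_normal((N, D_IMG))
scores = 1.0 + 4.0 * rng.random(N)

# Lightweight head: ridge regression on frozen features, closed form.
feats = encode(images)
lam = 1e-2
head = np.linalg.solve(
    feats.T @ feats + lam * np.eye(D_FEAT),
    feats.T @ (scores - scores.mean()),
)

def predict(images):
    return encode(images) @ head + scores.mean()

preds = predict(images)
mse = np.mean((preds - scores) ** 2)
```

Freezing the backbone keeps the generatively learned face prior intact and limits the trainable parameters to the head, which is the design choice the ablations credit for the performance gain.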