🤖 AI Summary
To address insufficient generalization in open-set anomaly detection, this paper proposes a two-stage human-perception-guided pretraining framework. In Stage I, an autoencoder learns, without any class labels, a mapping from images to human saliency maps, modeling visual attention priors; in Stage II, the decoder is removed and a classification head is trained on top of the encoder using class labels. The paradigm requires no architectural modification and is plug-and-play compatible with mainstream CNN backbones (e.g., ResNet, ConvNeXt, VGG). Its core innovation lies in isolating human perception modeling in a dedicated pretraining stage, cleanly separating representation learning from task-specific adaptation. Evaluated on three real-world tasks—iris presentation attack detection, detection of synthetically generated faces, and chest X-ray anomaly detection—the method consistently outperforms both ImageNet pretraining and existing perception-guided approaches, achieving significant gains in cross-domain generalization.
📝 Abstract
Incorporating human perception into the training of convolutional neural networks (CNNs) has boosted the generalization capabilities of such models in open-set recognition tasks. One of the active research questions is where (in the model architecture or training pipeline) and how to efficiently incorporate always-limited human perceptual data into model training strategies. In this paper, we introduce MENTOR (huMan pErceptioN-guided preTraining fOr increased geneRalization), which addresses this question through two unique rounds of training CNNs tasked with open-set anomaly detection. First, we train an autoencoder to learn human saliency maps given an input image, without any class labels. The autoencoder is thus tasked with discovering domain-specific salient features that mimic human perception. Second, we remove the decoder, add a classification layer on top of the encoder, and train this new model conventionally, now using class labels. We show that MENTOR raises generalization performance across three different CNN backbones in a variety of anomaly detection tasks (demonstrated for detection of unknown iris presentation attacks, synthetically generated faces, and anomalies in chest X-ray images) compared to traditional pretraining methods (e.g., sourcing the weights from ImageNet), as well as state-of-the-art methods that incorporate human perception guidance into training. In addition, we demonstrate that MENTOR can be flexibly applied to existing human perception-guided methods, further increasing their generalization with no architectural modifications.
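The two-stage weight reuse described above can be sketched structurally. This is a minimal NumPy illustration, not the paper's implementation: the paper uses full CNN backbones (ResNet, ConvNeXt, VGG), whereas here the encoder, decoder, and head are single linear layers with illustrative dimensions, and the training loops are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) dimensions: flattened image, latent code, classes.
IMG_DIM, LATENT_DIM, N_CLASSES = 64, 16, 2

# Stage I: autoencoder mapping images -> human saliency maps (no class labels).
# In practice the encoder/decoder would be a CNN trained with an MSE loss
# against collected human saliency annotations.
enc_W = rng.normal(0.0, 0.1, (LATENT_DIM, IMG_DIM))
dec_W = rng.normal(0.0, 0.1, (IMG_DIM, LATENT_DIM))

def encode(x):
    # Encoder: discovers perception-mimicking salient features.
    return np.tanh(enc_W @ x)

def decode(z):
    # Decoder: reconstructs a predicted saliency map (same size as the image).
    return dec_W @ z

# Stage II: drop the decoder, reuse the Stage-I encoder weights, and train a
# classification head on top, now with class labels.
cls_W = rng.normal(0.0, 0.1, (N_CLASSES, LATENT_DIM))

def classify(x):
    z = encode(x)                    # encoder carried over from Stage I
    logits = cls_W @ z               # lightweight classification head
    e = np.exp(logits - logits.max())
    return e / e.sum()               # softmax class probabilities

x = rng.normal(size=IMG_DIM)
saliency_pred = decode(encode(x))    # Stage-I output: saliency map
probs = classify(x)                  # Stage-II output: class probabilities
```

The structural point is that the encoder is shared between the two stages, so perception-guided pretraining transfers to the downstream classifier without any architectural change beyond swapping the decoder for a classification head.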