🤖 AI Summary
This work addresses the lack of a unified and efficient backbone architecture for facial analysis by proposing a joint training framework that integrates Masked Autoencoders (MAE) as a self-supervised auxiliary task within the L-SSAT paradigm. The study systematically evaluates the compatibility of various backbone networks—ranging from shallow to deep—with fused local texture descriptors. Experimental results demonstrate that no single backbone universally outperforms others across tasks; instead, optimal backbone selection is inherently task-dependent. The approach achieves average accuracies of 0.94, 0.87, and 0.88 on FaceForensics++, CelebA, and AffectNet, respectively, underscoring the critical role of task-specific backbone design in enhancing model robustness and discriminative power.
📝 Abstract
In this work, we benchmark different backbones and study their impact when self-supervised learning (SSL) is used as an auxiliary task to blend texture-based local descriptors into feature modelling for efficient face analysis. Previous work has established that combining a primary task with a self-supervised auxiliary task enables more robust and discriminative representation learning.
We employ backbones ranging from shallow to deep for the SSL task of a Masked Autoencoder (MAE), used as an auxiliary objective to reconstruct texture features such as local patterns alongside the primary task in local-pattern SSAT (L-SSAT), ensuring robust and unbiased face analysis.
To expand the benchmark, we conduct a comprehensive comparative analysis across multiple model configurations within the proposed framework. To this end, we address three research questions: "What is the role of the backbone in the performance of L-SSAT?", "What type of backbone is effective for different face analysis tasks?", and "Is there any generalized backbone for effective face analysis with L-SSAT?".
To answer these questions, we provide a detailed study with supporting experiments. The performance evaluation demonstrates that the optimal backbone for the proposed method is highly dependent on the downstream task, achieving average accuracies of 0.94 on FaceForensics++, 0.87 on CelebA, and 0.88 on AffectNet.
No single unified backbone delivers consistent feature-representation quality and generalisation capability across the various face analysis paradigms considered, including face attribute prediction, emotion classification, and deepfake detection.
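The joint training objective described in the abstract, a primary task combined with an MAE-style reconstruction of local texture patterns, can be sketched as follows. This is a minimal, self-contained illustration under assumptions: the loss weight `lam`, the helper names, and the toy data are hypothetical and not taken from the paper.

```python
import math
import random

random.seed(0)

def mae_reconstruction_loss(patches, predicted, mask):
    """MSE computed only over masked patches, as in MAE-style objectives."""
    masked = [(p, q) for p, q, m in zip(patches, predicted, mask) if m]
    total = sum((pi - qi) ** 2 for p, q in masked for pi, qi in zip(p, q))
    count = sum(len(p) for p, _ in masked)
    return total / count

def cross_entropy(logits, label):
    """Primary-task loss: softmax cross-entropy for a single sample."""
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[label]

def joint_loss(logits, label, patches, predicted, mask, lam=0.5):
    # L_total = L_primary + lam * L_aux; lam is a hypothetical
    # weight for illustration, not a value reported in the paper.
    primary = cross_entropy(logits, label)
    aux = mae_reconstruction_loss(patches, predicted, mask)
    return primary + lam * aux, primary, aux

# Toy data: 16 texture-descriptor "patches" of dimension 8.
patches = [[random.gauss(0, 1) for _ in range(8)] for _ in range(16)]
predicted = [[x + 0.1 * random.gauss(0, 1) for x in p] for p in patches]
mask = [random.random() < 0.75 for _ in range(16)]  # MAE-style high masking ratio
logits = [random.gauss(0, 1) for _ in range(3)]

total, primary, aux = joint_loss(logits, 1, patches, predicted, mask)
```

In practice both losses would be backpropagated through a shared backbone, which is exactly why the backbone choice studied here affects both the primary task and the auxiliary reconstruction.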