Representation Learning and Identity Adversarial Training for Facial Behavior Understanding

📅 2024-07-15
🏛️ arXiv.org
📈 Citations: 4
Influential: 3
🤖 AI Summary
This work addresses shortcut learning in facial Action Unit (AU) detection caused by insufficient data diversity and subject identity bias. To mitigate this, the authors propose FMAE-IAT. First, they systematically characterize how subject identity information interferes with AU prediction. Second, they introduce Face9M, a large-scale facial dataset of 9 million images drawn from multiple public sources, to enhance data diversity. Third, they design the Facial Masked Autoencoder (FMAE) for self-supervised pretraining on Face9M and integrate Identity Adversarial Training (IAT) to enforce identity-invariant, AU-discriminative representations. The method achieves state-of-the-art F1 scores of 67.1% on BP4D, 66.8% on BP4D+, and 70.1% on DISFA, surpassing prior methods at the time of publication. All code and pretrained models are publicly released.

📝 Abstract
Facial Action Unit (AU) detection has gained significant attention as it enables the breakdown of complex facial expressions into individual muscle movements. In this paper, we revisit two fundamental factors in AU detection: diverse, large-scale data and subject identity regularization. Motivated by recent advances in foundation models, we highlight the importance of data and introduce Face9M, a diverse dataset comprising 9 million facial images from multiple public sources. Pretraining a masked autoencoder on Face9M yields strong performance in AU detection and facial expression tasks. More importantly, we emphasize that Identity Adversarial Training (IAT) has not been well explored in AU tasks. To fill this gap, we first show that subject identity in AU datasets creates shortcut learning for the model and leads to sub-optimal solutions for AU prediction. Second, we demonstrate that strong IAT regularization is necessary to learn identity-invariant features. Finally, we elucidate the design space of IAT and empirically show that IAT circumvents identity-based shortcut learning and results in a better solution. Our proposed methods, the Facial Masked Autoencoder (FMAE) and IAT, are simple, generic, and effective. Remarkably, the proposed FMAE-IAT approach achieves new state-of-the-art F1 scores on the BP4D (67.1%), BP4D+ (66.8%), and DISFA (70.1%) databases, significantly outperforming previous work. We release the code and model at https://github.com/forever208/FMAE-IAT.
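The masked-autoencoder pretraining described above hinges on randomly masking a large fraction of image patches and reconstructing them from the visible remainder. Below is a minimal sketch of the standard MAE-style random masking step, written in PyTorch; it follows the generic MAE recipe rather than the paper's released code, and the tensor shapes and 75% mask ratio are assumptions for illustration.

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Randomly keep a subset of patch embeddings, MAE-style.

    patches: (B, N, D) sequence of patch embeddings.
    Returns the visible patches, a binary mask (0 = kept, 1 = masked),
    and the indices needed to restore the original patch order.
    """
    B, N, D = patches.shape
    len_keep = int(N * (1 - mask_ratio))

    # Shuffle patch indices per sample via random noise.
    noise = torch.rand(B, N)
    ids_shuffle = torch.argsort(noise, dim=1)
    ids_restore = torch.argsort(ids_shuffle, dim=1)

    # Keep the first len_keep shuffled patches as the visible set.
    ids_keep = ids_shuffle[:, :len_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    # Binary mask over all N positions: 0 where kept, 1 where masked.
    mask = torch.ones(B, N)
    mask.scatter_(1, ids_keep, 0)
    return visible, mask, ids_restore
```

Only the visible patches are fed to the encoder; the decoder later reinserts mask tokens using `ids_restore` and is trained to reconstruct the masked pixels, which is what makes the 75% masking computationally cheap.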
Problem

Research questions and friction points this paper is trying to address.

Improving Facial Action Unit detection via large-scale diverse data
Addressing identity-based shortcut learning in AU prediction models
Enhancing identity-invariant features through adversarial training regularization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Face9M dataset with 9 million facial images
Employs Facial Masked Autoencoder (FMAE) for pretraining
Applies Identity Adversarial Training (IAT) for regularization
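The identity-adversarial regularization in the bullets above is commonly implemented with a gradient reversal layer, as in domain-adversarial training: an identity classifier is trained on the shared features, while reversed gradients push the encoder to discard identity cues. The sketch below is a hypothetical PyTorch illustration of that pattern, not the authors' code; the linear encoder stand-in (in place of the FMAE backbone), the head sizes, and the reversal strength `lambd` are all assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity map forward; multiplies the gradient by -lambd backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed (negated, scaled) gradient flows into the encoder.
        return -ctx.lambd * grad_output, None

class IATModel(nn.Module):
    """Shared encoder with an AU head and an adversarial identity head."""

    def __init__(self, in_dim=2048, feat_dim=768, num_aus=12,
                 num_ids=41, lambd=1.0):
        super().__init__()
        self.encoder = nn.Linear(in_dim, feat_dim)  # stand-in for the FMAE encoder
        self.au_head = nn.Linear(feat_dim, num_aus)
        self.id_head = nn.Linear(feat_dim, num_ids)
        self.lambd = lambd

    def forward(self, x):
        feats = self.encoder(x)
        au_logits = self.au_head(feats)
        # Gradients from the identity loss are reversed before the encoder,
        # so minimizing identity loss makes features identity-invariant.
        id_logits = self.id_head(GradReverse.apply(feats, self.lambd))
        return au_logits, id_logits
```

During training, both the multi-label AU loss and the identity cross-entropy loss are minimized jointly; the reversal layer is what turns the identity objective into a regularizer on the encoder rather than a cooperative task.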