Rethinking the Learning Paradigm for Facial Expression Recognition

📅 2022-09-30
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Real-world facial expression recognition (FER) suffers from label ambiguity arising from subjective crowd-sourced annotations and inter-class similarity. Conventional approaches typically enforce hard, one-hot labels—ignoring inherent annotation uncertainty—and rely on strong supervision. This paper proposes a weakly supervised FER framework explicitly designed for raw ambiguous labels, offering the first systematic validation of its feasibility and superiority. Our method directly leverages label distributions via three key components: (i) explicit label distribution modeling, (ii) an uncertainty-aware loss function, and (iii) distribution-level knowledge distillation—all enabling end-to-end learning without label discretization. Evaluated on real-world ambiguously annotated benchmarks—including RAF-DB and AffectNet—our model achieves an average accuracy gain of 3.2% over strong supervised baselines. Moreover, it demonstrates significantly improved generalization, robustness to distribution shifts, and prediction calibration.
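The core idea of component (ii) can be illustrated with a minimal sketch: instead of cross-entropy against a one-hot target, the loss is computed against the full annotation distribution. This is an illustrative NumPy example, not the paper's exact loss function; the logits, class count, and 60/40 annotator split are hypothetical.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soft_label_cross_entropy(logits, label_dist, eps=1e-12):
    """Cross-entropy against a full label distribution rather than a one-hot target."""
    p = softmax(logits)
    return -np.sum(label_dist * np.log(p + eps), axis=-1).mean()

# Hypothetical case: 7 expression classes, annotators split 60/40
# between two classes for an ambiguous face.
logits = np.array([[0.1, 0.2, 2.0, 1.5, 0.0, 0.1, 0.1]])
soft_target = np.array([[0.0, 0.0, 0.6, 0.4, 0.0, 0.0, 0.0]])
one_hot = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]])

loss_soft = soft_label_cross_entropy(logits, soft_target)
loss_hard = soft_label_cross_entropy(logits, one_hot)
```

Because the soft target spreads probability mass in proportion to annotator disagreement, the model is never penalized for hedging between genuinely confusable expressions, which is what the summary credits for the improved calibration.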
📝 Abstract
Due to subjective crowdsourcing annotations and the inherent inter-class similarity of facial expressions, real-world Facial Expression Recognition (FER) datasets usually exhibit ambiguous annotations. To simplify the learning paradigm, most previous methods convert ambiguous annotation results into precise one-hot annotations and train FER models in an end-to-end supervised manner. In this paper, we rethink the existing training paradigm and propose that it is better to use weakly supervised strategies to train FER models with the original ambiguous annotations.
Problem

Research questions and friction points this paper is trying to address.

Address ambiguous facial expression annotations from subjective crowdsourcing
Handle inherent inter-class similarity in facial expression recognition
Propose weakly supervised training over precise one-hot label conversion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using weakly supervised learning strategies
Training models with ambiguous annotations
Avoiding conversion to one-hot labels
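The last two points can be sketched together: raw crowd-sourced votes are normalized into a label distribution rather than collapsed to the majority class. This is a minimal illustration under assumed conventions; the class list, vote data, and helper names are hypothetical, not from the paper.

```python
import numpy as np
from collections import Counter

# Hypothetical 7-class FER label set.
CLASSES = ["anger", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def votes_to_distribution(votes):
    """Turn raw crowd-sourced votes into a normalized label distribution,
    preserving annotator disagreement."""
    counts = Counter(votes)
    dist = np.array([counts.get(c, 0) for c in CLASSES], dtype=float)
    return dist / dist.sum()

def votes_to_one_hot(votes):
    """Conventional pipeline: collapse votes to the majority class,
    discarding the ambiguity."""
    majority = Counter(votes).most_common(1)[0][0]
    return np.array([1.0 if c == majority else 0.0 for c in CLASSES])

votes = ["surprise", "surprise", "fear", "surprise", "fear"]  # hypothetical annotations
dist = votes_to_distribution(votes)   # keeps the 3/5 vs 2/5 split
hard = votes_to_one_hot(votes)        # loses it
```

Training on `dist` instead of `hard` is what distinguishes the proposed weakly supervised paradigm from the conventional one-hot pipeline.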