🤖 AI Summary
Existing facial expression recognition methods rely predominantly on CNNs to extract static appearance features, neglecting the relationships among facial regions. To address this, the authors propose the Appearance- and Relation-aware Parallel Graph attention fusion Network (ARPGNet). First, frame-level appearance features are extracted with a pretrained CNN. Second, a facial region relation graph is constructed, and a graph attention mechanism models the relationships between regions. Third, a parallel graph attention fusion module lets the appearance and relation representation sequences interact and enhance each other, capturing both the complementarity across sequences and the temporal dynamics within each sequence. Experiments on three benchmark datasets (RAF-DB, FER2013, and AffectNet) show that ARPGNet outperforms or is comparable to state-of-the-art methods, validating the benefit of jointly modeling appearance representations and inter-region relational dynamics.
📝 Abstract
The key to facial expression recognition is to learn discriminative spatial-temporal representations that embed facial expression dynamics. Previous studies predominantly rely on pre-trained Convolutional Neural Networks (CNNs) to learn facial appearance representations, overlooking the relationships between facial regions. To address this issue, this paper presents an Appearance- and Relation-aware Parallel Graph attention fusion Network (ARPGNet) to learn mutually enhanced spatial-temporal representations of appearance and relation information. Specifically, we construct a facial region relation graph and leverage the graph attention mechanism to model the relationships between facial regions. The resulting relational representation sequences, along with CNN-based appearance representation sequences, are then fed into a parallel graph attention fusion module for mutual interaction and enhancement. This module simultaneously explores the complementarity between different representation sequences and the temporal dynamics within each sequence. Experimental results on three facial expression recognition datasets demonstrate that the proposed ARPGNet outperforms or is comparable to state-of-the-art methods.
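The core building block the abstract describes, graph attention over a facial region relation graph, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name `graph_attention`, the region count, and the feature dimensions are illustrative assumptions, and the layer shown is a standard single-head graph attention pass (project node features, score connected region pairs, softmax the scores, aggregate neighbors).

```python
import numpy as np

def graph_attention(H, A, W, a, slope=0.2):
    """Single-head graph attention over facial-region features (illustrative).

    H: (N, F) node features, one row per facial region
    A: (N, N) adjacency of the region relation graph (nonzero = connected)
    W: (F, F_out) shared linear projection
    a: (2 * F_out,) attention parameter vector
    """
    Z = H @ W                      # project every region's features
    N = Z.shape[0]
    e = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            # attention logit e_ij = LeakyReLU(a^T [z_i || z_j])
            s = np.concatenate([Z[i], Z[j]]) @ a
            e[i, j] = s if s > 0 else slope * s
    e = np.where(A > 0, e, -1e9)   # keep only edges of the relation graph
    e -= e.max(axis=1, keepdims=True)
    att = np.exp(e)
    att /= att.sum(axis=1, keepdims=True)  # row-wise softmax over neighbors
    return att @ Z                 # each region aggregates its neighbors

# Toy usage: 5 facial regions in a ring graph with self-loops.
rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))
A = np.eye(5)
for i in range(5):
    A[i, (i + 1) % 5] = A[i, (i - 1) % 5] = 1
out = graph_attention(H, A, rng.normal(size=(8, 4)), rng.normal(size=(8,)))
```

In ARPGNet this relational representation is computed per frame, producing a sequence that is fused with the CNN appearance sequence; the sketch above covers only the per-frame relational step.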