🤖 AI Summary
Existing methods for micro-expression action unit (Micro-AU) detection struggle to simultaneously capture the local independence and global dependencies among AUs, resulting in insufficient regional awareness. To address this limitation, this work proposes the Micro-AU CLIP framework, which introduces a novel “local independence–global dependency” modeling paradigm. Specifically, Patch Token Attention enables fine-grained, locally independent semantic modeling, while Global Dependency Attention combined with a dedicated Global Dependency Loss captures holistic semantic relationships across AUs. Furthermore, a Micro-AU Contrastive Loss (MiAUCL) is designed to achieve fine-grained vision–language alignment. Notably, the proposed method effectively identifies micro-expressions without requiring emotion labels and achieves state-of-the-art performance on Micro-AU detection, overcoming key limitations of CLIP in micro-semantic alignment.
📝 Abstract
Micro-expression (ME) action units (Micro-AUs) provide objective clues for fine-grained genuine emotion analysis. Most existing Micro-AU detection methods learn AU features from the whole facial image/video, which conflicts with the inherent locality of AUs, resulting in insufficient perception of AU regions. In fact, each AU independently corresponds to specific localized facial muscle movements (local independence), while some AUs exhibit inherent dependencies under specific emotional states (global dependency). This paper therefore explores the effectiveness of this independence-to-dependency pattern and proposes a novel Micro-AU detection framework, Micro-AU CLIP, that uniquely decomposes the AU detection process into local semantic independence (LSI) modeling and global semantic dependency (GSD) modeling. In LSI, Patch Token Attention (PTA) is designed to map several local features within an AU region into the same feature space; in GSD, Global Dependency Attention (GDA) and a Global Dependency Loss (GDLoss) are presented to model the global dependency relationships between different AUs, thereby enhancing each AU feature. Furthermore, considering CLIP's native limitations in micro-semantic alignment, a Micro-AU contrastive loss (MiAUCL) is designed to learn AU features through fine-grained alignment of visual and textual features. Micro-AU CLIP is also applied effectively to ME recognition in an emotion-label-free way. Experimental results demonstrate that Micro-AU CLIP fully learns fine-grained Micro-AU features and achieves state-of-the-art performance.
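The abstract does not spell out the form of MiAUCL, but CLIP-style fine-grained alignment is typically realized as a symmetric InfoNCE objective between paired visual and text embeddings. The sketch below is a minimal, hypothetical illustration of such a loss applied per AU (the function name, feature shapes, and temperature value are assumptions, not the paper's actual formulation):

```python
import numpy as np

def miau_contrastive_loss(visual, text, temperature=0.07):
    """Hypothetical sketch of a CLIP-style symmetric contrastive loss
    aligning per-AU visual features with AU text embeddings.

    visual, text: arrays of shape (num_aus, dim); row i of each is the
    matched pair for AU i. Shapes and temperature are assumptions.
    """
    # L2-normalize both modalities so dot products are cosine similarities
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)
    logits = v @ t.T / temperature  # (num_aus, num_aus) similarity matrix

    def cross_entropy_diag(l):
        # softmax over each row; matched pairs sit on the diagonal
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(np.diag(p)).mean()

    # symmetric: vision-to-text and text-to-vision directions
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

Under this formulation, the loss pulls each AU's visual feature toward its own text embedding and pushes it away from the other AUs' embeddings, which is one plausible reading of "fine-grained alignment of visual and textual features".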