Micro-AU CLIP: Fine-Grained Contrastive Learning from Local Independence to Global Dependency for Micro-Expression Action Unit Detection

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for micro-expression action unit (Micro-AU) detection struggle to simultaneously capture the local independence and global dependencies among AUs, resulting in insufficient regional awareness. To address this limitation, this work proposes the Micro-AU CLIP framework, which introduces a novel “local independence–global dependency” modeling paradigm. Specifically, Patch Token Attention enables fine-grained, locally independent semantic modeling, while Global Dependency Attention combined with a dedicated Global Dependency Loss captures holistic semantic relationships across AUs. Furthermore, a Micro-AU Contrastive Loss (MiAUCL) is designed to achieve fine-grained vision–language alignment. Notably, the proposed method effectively identifies micro-expressions without requiring emotion labels and achieves state-of-the-art performance on Micro-AU detection, overcoming key limitations of CLIP in micro-semantic alignment.

📝 Abstract
Micro-expression (ME) action units (Micro-AUs) provide objective cues for fine-grained analysis of genuine emotion. Most existing Micro-AU detection methods learn AU features from the whole facial image or video, which conflicts with the inherent locality of AUs and results in insufficient perception of AU regions. In fact, each AU independently corresponds to a specific localized facial muscle movement (local independence), while inherent dependencies exist between some AUs under specific emotional states (global dependency). This paper therefore explores the effectiveness of this independence-to-dependency pattern and proposes a novel Micro-AU detection framework, Micro-AU CLIP, which uniquely decomposes the detection process into local semantic independence (LSI) modeling and global semantic dependency (GSD) modeling. In LSI, Patch Token Attention (PTA) is designed to map several local features within each AU region into the same feature space; in GSD, Global Dependency Attention (GDA) and a Global Dependency Loss (GDLoss) are presented to model global dependency relationships between different AUs, thereby enhancing each AU feature. Furthermore, considering CLIP's native limitations in micro-semantic alignment, a Micro-AU Contrastive Loss (MiAUCL) is designed to learn AU features through fine-grained alignment of visual and text features. Micro-AU CLIP is also effectively applied to ME recognition in an emotion-label-free way. Experimental results demonstrate that Micro-AU CLIP fully learns fine-grained Micro-AU features and achieves state-of-the-art performance.
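The abstract does not give MiAUCL's exact formulation. As a rough illustration only, the sketch below shows one plausible AU-level vision–text contrastive loss in the spirit of "fine-grained alignment of visual and text features": every name, tensor shape, and the two-prompt (absent/present) design here are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def l2norm(x, axis=-1):
    # Normalize feature vectors to unit length (cosine-similarity space).
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def micro_au_contrastive_loss(img_feats, txt_feats, labels, tau=0.07):
    """Hypothetical AU-level contrastive loss (MiAUCL-style sketch).

    img_feats: (B, K, D) per-AU visual features for B samples, K AUs.
    txt_feats: (K, 2, D) text embeddings of two prompts per AU,
               index 0 = "AU absent", index 1 = "AU present" (assumed).
    labels:    (B, K) binary AU activation labels.
    Each AU's visual feature is pulled toward the prompt matching its
    label and pushed away from the opposite prompt.
    """
    v = l2norm(img_feats)                              # (B, K, D)
    t = l2norm(txt_feats)                              # (K, 2, D)
    # Cosine similarity of each AU feature to its two prompts: (B, K, 2)
    logits = np.einsum('bkd,kcd->bkc', v, t) / tau
    # Softmax cross-entropy toward the ground-truth prompt.
    logits -= logits.max(axis=-1, keepdims=True)
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    picked = np.take_along_axis(probs, labels.astype(int)[..., None],
                                axis=-1)[..., 0]       # (B, K)
    return -np.log(picked + 1e-8).mean()
```

Under this sketch, visual features that already sit near their ground-truth prompt embeddings yield a near-zero loss, while random features are penalized; the per-AU decomposition is what makes the alignment fine-grained rather than image-level as in vanilla CLIP.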
Problem

Research questions and friction points this paper is trying to address.

Micro-expression
Action Unit Detection
Local Independence
Global Dependency
Fine-Grained Emotion Analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Micro-AU CLIP
Local Semantic Independence
Global Semantic Dependency
Fine-Grained Contrastive Learning
Patch Token Attention
Jinsheng Wei
Nanjing University of Posts and Telecommunications
Fengzhou Guo
Nanjing University of Posts and Telecommunications
Yante Li
University of Oulu
Computer Vision, Affective Computing, Deep Learning
Haoyu Chen
Nanjing University of Posts and Telecommunications
Guanming Lu
Nanjing University of Posts and Telecommunications
Guoying Zhao
Academy Professor, IEEE Fellow, Professor of Computer Science and Engineering, University of Oulu
Affective Computing, Artificial Intelligence, Computer Vision, Pattern Recognition