🤖 AI Summary
This paper addresses the challenge of automatically inferring psychologically plausible causal relationships between facial action units (AUs) and expressions from observational data. We propose the first prior-free causal graph discovery framework for facial affect analysis. Methodologically, we design a two-level causal model: a population-level module learns theory-consistent shared causal structures grounded in psychological principles, while an individual-level module performs sample-adaptive causal graph inference. We further introduce a feature-level counterfactual intervention mechanism to explicitly eliminate spurious correlations. Notably, our framework discovers inhibitory causal relations among AUs, and it does so without requiring manually annotated causal priors or joint distribution labels. Extensive experiments on six benchmark datasets demonstrate significant improvements in both AU detection and expression recognition. These results validate the efficacy of causal discovery for interpretable facial behavior modeling and advance facial affect analysis toward causal intelligence.
📝 Abstract
Understanding human affect from facial behavior requires not only accurate recognition but also structured reasoning over the latent dependencies that drive muscle activations and their expressive outcomes. Although Action Units (AUs) have long served as the foundation of affective computing, existing approaches rarely address how to infer psychologically plausible causal relations between AUs and expressions directly from data. We propose CausalAffect, the first framework for causal graph discovery in facial affect analysis. CausalAffect models AU-AU and AU-Expression dependencies through a two-level, polarity- and direction-aware causal hierarchy that integrates population-level regularities with sample-adaptive structures. A feature-level counterfactual intervention mechanism further isolates true causal effects while suppressing spurious correlations. Crucially, our approach requires neither jointly annotated datasets nor handcrafted causal priors, yet it recovers causal structures consistent with established psychological theories while revealing novel inhibitory and previously uncharacterized dependencies. Extensive experiments across six benchmarks demonstrate that CausalAffect advances the state of the art in both AU detection and expression recognition, establishing a principled connection between causal discovery and interpretable facial behavior analysis. All trained models and source code will be released upon acceptance.
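The two-level design described above can be illustrated with a minimal toy sketch: a shared, signed (polarity-aware) adjacency over AUs plays the role of the population-level graph, a small sample-conditioned residual plays the role of the individual-level graph, and a feature-level intervention (do(x_i = 0)) measures how an AU's activation propagates to its causal children. All sizes, weights, and function names here are hypothetical illustrations, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AU = 5  # toy number of action units (real AU sets are larger)

# Population-level module: a shared signed adjacency over AUs.
# Sign encodes polarity: positive = excitatory, negative = inhibitory.
# These entries are hypothetical, not learned values from the paper.
A_pop = np.zeros((N_AU, N_AU))
A_pop[0, 1] = 0.8    # AU0 -> AU1 (excitatory edge)
A_pop[2, 3] = -0.6   # AU2 -| AU3 (inhibitory edge)

def individual_graph(features, W):
    """Sample-adaptive residual: a small per-sample adjustment to the shared graph."""
    delta = np.tanh(W @ features).reshape(N_AU, N_AU)
    return A_pop + 0.1 * delta

def propagate(x, A):
    """One step of linear message passing of AU activations along the graph."""
    return x + A.T @ x

def counterfactual_effect(x, A, unit):
    """Feature-level intervention do(x[unit] = 0): compare factual vs. counterfactual outcomes."""
    x_cf = x.copy()
    x_cf[unit] = 0.0
    return propagate(x, A) - propagate(x_cf, A)

W = rng.normal(scale=0.1, size=(N_AU * N_AU, N_AU))
feat = rng.normal(size=N_AU)          # per-sample facial features (toy)
A = individual_graph(feat, W)         # combined two-level causal graph

x = np.abs(rng.normal(size=N_AU))     # AU activations for one sample
effect = counterfactual_effect(x, A, unit=0)
# Intervening on AU0 mostly affects its causal children (AU1 via A_pop[0, 1]);
# AUs with no population-level edge from AU0 see only the small residual term.
```

In this toy version, comparing the magnitude of `effect` across AUs is the analogue of separating true causal effects from spurious correlations: only downstream units of the intervened AU should respond strongly.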