🤖 AI Summary
This work investigates the visual cues underlying facial emotion recognition (FER) by foundation models (FMs) and their psychological validity, focusing on shortcut learning induced by proxy cues and the associated fairness risks. Using a teeth-annotated subset of AffectNet, we conduct zero-shot FER and structured attribution analysis on vision-language models (VLMs) of varying scales. We show that tooth visibility serves as a strong proxy cue that substantially biases model predictions. Although models such as GPT-4o exhibit internally consistent valence-arousal response patterns, they rely heavily on superficial facial attributes, such as eyebrow position, rather than deeper psychological representations. These findings expose latent bias mechanisms in current FMs when they are deployed in sensitive domains (e.g., mental health, education), challenging their reliability and equity. Our study provides critical empirical evidence for interpretable FER and fair AI design, highlighting the need to mitigate spurious correlations in multimodal affective modeling.
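For intuition, the zero-shot FER setup described above can be approximated with a single VLM query per image. The sketch below uses the OpenAI Python SDK to send a face image to GPT-4o and request a categorical emotion plus valence-arousal estimates; the prompt wording, label set, and JSON reply format are illustrative assumptions, not the paper's exact protocol.

```python
import base64
from openai import OpenAI  # illustrative SDK choice; any VLM API would work similarly

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative prompt; the paper's actual prompt may differ.
PROMPT = (
    "Classify the facial expression as one of: neutral, happy, sad, "
    "surprise, fear, disgust, anger, contempt. Also estimate valence and "
    "arousal in [-1, 1]. Reply as JSON with keys emotion, valence, arousal."
)

def zero_shot_fer(image_path: str) -> str:
    """Query GPT-4o with a single face image and return its raw reply."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```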
📝 Abstract
Foundation Models (FMs) are rapidly transforming Affective Computing (AC), with Vision Language Models (VLMs) now capable of recognising emotions in zero-shot settings. This paper probes a critical but underexplored question: what visual cues do these models rely on to infer affect, and are these cues psychologically grounded or superficially learnt? We benchmark VLMs of varying scales on a teeth-annotated subset of the AffectNet dataset and find consistent performance shifts depending on the presence of visible teeth. Through structured introspection of the best-performing model, GPT-4o, we show that facial attributes such as eyebrow position drive much of its affective reasoning, while its valence-arousal predictions exhibit a high degree of internal consistency. These patterns highlight the emergent nature of FM behaviour, but they also reveal risks of shortcut learning, bias, and unfairness, especially in sensitive domains such as mental health and education.
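To quantify the teeth-visibility effect described above, a minimal evaluation can split model predictions by the teeth annotation and compare per-group accuracy. The sketch below assumes hypothetical record fields ('teeth_visible', 'label', 'prediction'); the paper's actual annotation schema and metrics may differ.

```python
from collections import defaultdict

def accuracy_by_teeth_visibility(records):
    """Per-group accuracy for a hypothetical record format:
    each record is a dict with keys 'teeth_visible' (bool),
    'label' (str), and 'prediction' (str)."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        group = "teeth_visible" if r["teeth_visible"] else "teeth_hidden"
        total[group] += 1
        correct[group] += int(r["prediction"] == r["label"])
    return {g: correct[g] / total[g] for g in total}

# Toy example (made-up data, not results from the paper):
toy = [
    {"teeth_visible": True,  "label": "happy", "prediction": "happy"},
    {"teeth_visible": False, "label": "happy", "prediction": "neutral"},
    {"teeth_visible": False, "label": "sad",   "prediction": "sad"},
]
print(accuracy_by_teeth_visibility(toy))  # {'teeth_visible': 1.0, 'teeth_hidden': 0.5}
```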