🤖 AI Summary
Visual emotion recognition (VER) suffers from the “affective gap”: pretrained vision models extract factual-level features that lack direct alignment with abstract emotional semantics. To bridge this gap, we propose Partitioned Adaptive Contrastive Learning (PACL), a framework that transfers affective knowledge from pretrained language models via noisy image-text pairs, enabling cross-modal affective representation alignment. Our method dynamically classifies sample types and constructs semantics-aware positive and negative pairs, thereby uncovering latent emotion-fact associations within noisy data without requiring high-quality annotations. By integrating multimodal contrastive learning with noise-robust modeling, our approach significantly improves emotion classification accuracy on standard benchmarks (e.g., +3.2% average accuracy on FER-2013 and AffectNet) across mainstream vision backbones, including ViT and ResNet. The code is publicly available, establishing a scalable, low-supervision knowledge-transfer paradigm for VER.
📝 Abstract
Visual emotion recognition (VER) is a longstanding field that has garnered increasing attention with the advancement of deep neural networks. Although recent studies have achieved notable improvements by leveraging the knowledge embedded in pre-trained visual models, the lack of a direct association between factual-level features and emotional categories, known as the "affective gap", limits the applicability of pre-training knowledge to VER tasks. In contrast, the explicit emotional expression and high information density of the textual modality eliminate the "affective gap". We therefore propose borrowing knowledge from pre-trained textual models to enhance the emotional perception of pre-trained visual models. Focusing on the factual and emotional connections between images and texts in noisy social media data, we propose Partitioned Adaptive Contrastive Learning (PACL) to exploit these connections. Specifically, we separate samples into different types and devise a distinct contrastive learning strategy for each type. By dynamically constructing negative and positive pairs, we fully exploit the potential of noisy samples. Comprehensive experiments demonstrate that bridging the "affective gap" significantly improves the performance of various pre-trained visual models on downstream emotion-related tasks. Our code is available at https://github.com/wdqqdw/PACL.
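To make the core idea concrete, below is a minimal sketch of a partitioned contrastive objective in the spirit of PACL. It assumes a symmetric InfoNCE-style loss and a simple similarity threshold for partitioning image-text pairs into "clean" and "noisy" subsets; the function name, `partition_threshold`, and the down-weighting scheme are illustrative assumptions on our part, not the paper's actual partitioning or pair-construction strategies (see the repository above for those).

```python
# Illustrative sketch of a partitioned contrastive loss (not the official PACL code).
import torch
import torch.nn.functional as F

def partitioned_contrastive_loss(img_emb, txt_emb, temperature=0.07,
                                 partition_threshold=0.3):
    """img_emb, txt_emb: (B, D) embeddings of paired images and texts."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature   # (B, B) scaled similarities
    pair_sim = logits.diag() * temperature         # cosine similarity of each pair

    # Partition (assumed heuristic): pairs with high image-text similarity are
    # treated as "clean" matches; the rest are treated as noisy.
    clean = pair_sim >= partition_threshold

    # Symmetric InfoNCE over both retrieval directions, kept per-pair.
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    per_pair = 0.5 * (F.cross_entropy(logits, targets, reduction='none') +
                      F.cross_entropy(logits.t(), targets, reduction='none'))

    # Distinct treatment per partition: clean pairs contribute the full loss,
    # noisy pairs are down-weighted by their similarity so mismatched text
    # does not pull the visual encoder toward wrong emotional semantics.
    weights = torch.where(clean, torch.ones_like(pair_sim),
                          pair_sim.clamp(min=0.0))
    return (weights * per_pair).sum() / weights.sum().clamp(min=1e-8)
```

In this sketch the "dynamic" aspect comes from re-evaluating the partition every batch from the current embeddings, so a sample can move between subsets as training progresses; PACL's actual strategies for constructing positive and negative pairs per partition are richer than this single weighting rule.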