Bridging Visual Affective Gap: Borrowing Textual Knowledge by Learning from Noisy Image-Text Pairs

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Visual emotion recognition (VER) suffers from the "affective gap": pretrained vision models extract fact-level features that lack direct alignment with abstract emotional semantics. To bridge this gap, the authors propose Partitioned Adaptive Contrastive Learning (PACL), a framework that transfers affective knowledge from pretrained language models via noisy image-text pairs, enabling cross-modal affective representation alignment. The method dynamically classifies sample types and constructs semantic-aware positive/negative pairs, thereby uncovering latent emotion-fact associations within noisy data without requiring high-quality annotations. By integrating multimodal contrastive learning with noise-robust modeling, the approach improves emotion classification accuracy on standard benchmarks (e.g., +3.2% average accuracy on FER-2013 and AffectNet) across mainstream vision backbones including ViT and ResNet. The code is publicly available, establishing a scalable, low-supervision knowledge-transfer paradigm for VER.

📝 Abstract
Visual emotion recognition (VER) is a longstanding field that has garnered increasing attention with the advancement of deep neural networks. Although recent studies have achieved notable improvements by leveraging the knowledge embedded within pre-trained visual models, the lack of direct association between factual-level features and emotional categories, called the "affective gap", limits the applicability of pre-training knowledge for VER tasks. In contrast, the explicit emotional expression and high information density of the textual modality eliminate the "affective gap". We therefore propose borrowing knowledge from a pre-trained textual model to enhance the emotional perception of pre-trained visual models. We focus on the factual and emotional connections between images and texts in noisy social media data, and propose Partitioned Adaptive Contrastive Learning (PACL) to leverage these connections. Specifically, we separate different types of samples and devise distinct contrastive learning strategies for each type. By dynamically constructing negative and positive pairs, we fully exploit the potential of noisy samples. Through comprehensive experiments, we demonstrate that bridging the "affective gap" significantly improves the performance of various pre-trained visual models on downstream emotion-related tasks. Our code is released at https://github.com/wdqqdw/PACL.
Problem

Research questions and friction points this paper is trying to address.

Bridging the affective gap between visual features and emotional categories
Leveraging textual knowledge to enhance visual emotion recognition
Handling noisy image-text pairs from social media data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Borrowing knowledge from pre-trained textual models
Using Partitioned Adaptive Contrastive Learning method
Dynamically constructing negative and positive pairs
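The partitioning idea above can be sketched as a noise-aware contrastive loss: image-text pairs whose embeddings disagree are treated as noisy and their positive term is down-weighted, rather than being discarded. This is a minimal illustrative sketch, not the paper's actual implementation; the function name, the cosine-similarity noise score, and the `noise_threshold` parameter are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pacl_style_loss(img_emb, txt_emb, tau=0.07, noise_threshold=0.3):
    """Hypothetical sketch of a partitioned, noise-adaptive contrastive loss.

    img_emb, txt_emb: (B, D) embeddings from pre-trained visual and
    textual encoders. Matching indices form the candidate positive pairs.
    """
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / tau  # (B, B) image-to-text similarity matrix

    # Partition step (illustrative): pairs whose image-text agreement falls
    # below the threshold are treated as noisy, and their contribution to
    # the positive term is adaptively down-weighted instead of dropped.
    agree = (img * txt).sum(-1)  # per-pair cosine similarity
    weight = torch.where(agree > noise_threshold,
                         torch.ones_like(agree),
                         agree.clamp(min=0.0))

    # Standard InfoNCE over the batch, reweighted per pair.
    labels = torch.arange(img.size(0), device=img.device)
    per_pair = F.cross_entropy(logits, labels, reduction="none")
    return (weight * per_pair).mean()
```

In this sketch the "partition" is a hard threshold on agreement; the paper's method instead separates sample types and applies distinct contrastive strategies per partition, so this should be read only as the general shape of such a loss.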