AI Summary
Source-free domain adaptation for visual emotion recognition (SFDA-VER) suffers from degraded transfer performance due to the intrinsic ambiguity of emotion labels and noise in pseudo-labels. Method: This paper proposes, for the first time, a fuzzy-aware loss (FAL), grounded in a modified cross-entropy formulation that explicitly models uncertainty in both ground-truth labels and pseudo-labels; we theoretically prove FAL's robustness to pseudo-label noise. Crucially, FAL suppresses the losses of non-predicted classes without requiring auxiliary network modules. Results: Evaluated on 26 cross-domain sub-tasks across three benchmark datasets, our method consistently outperforms existing SFDA approaches. It demonstrates strong generalization and practical utility in privacy-sensitive scenarios where source data are inaccessible.
Abstract
Source-free domain adaptation in visual emotion recognition (SFDA-VER) is a highly challenging task that requires adapting VER models to the target domain without relying on source data, which is of great significance for data privacy protection. However, due to the non-negligible disparities between visual emotion data and traditional image classification data, existing SFDA methods perform poorly on this task. In this paper, we investigate the SFDA-VER task from a fuzzy perspective and identify two key issues: fuzzy emotion labels and fuzzy pseudo-labels. These issues arise from the inherent uncertainty of emotion annotations and the potential mispredictions in pseudo-labels. To address these issues, we propose a novel fuzzy-aware loss (FAL) that enables the VER model to better learn and adapt to new domains under fuzzy labels. Specifically, FAL modifies the standard cross-entropy loss and focuses on adjusting the losses of non-predicted categories, which prevents a large number of uncertain or incorrect predictions from overwhelming the VER model during adaptation. In addition, we provide a theoretical analysis of FAL and prove its robustness to the noise in generated pseudo-labels. Extensive experiments on 26 domain adaptation sub-tasks across three benchmark datasets demonstrate the effectiveness of our method.
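To make the idea of "adjusting the losses of non-predicted categories" concrete, here is a minimal NumPy sketch of one generic way to do this: the usual cross-entropy term on the labeled class, plus a tunably down-weighted penalty over all other classes. The function name, the `1 - p_k` form of the non-target term, and the `gamma` coefficient are illustrative assumptions for exposition; this is not the paper's actual FAL formulation, which readers should take from the paper itself.

```python
import numpy as np

def fuzzy_aware_loss_sketch(logits, target, gamma=0.1):
    """Illustrative sketch (NOT the paper's exact FAL).

    Standard cross-entropy on the labeled class, plus a penalty on
    non-predicted classes scaled by `gamma`; setting `gamma` small keeps
    uncertain mass on other classes from dominating the loss.
    """
    z = logits - logits.max()             # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()       # softmax probabilities
    ce = -np.log(p[target])               # usual cross-entropy term
    others = np.delete(p, target)         # probabilities of non-target classes
    reg = -gamma * np.log(1.0 - others).sum()  # down-weighted non-target term
    return ce + reg
```

With `gamma = 0` this reduces to plain cross-entropy; increasing `gamma` raises the cost of spreading probability mass over non-target classes, which is the knob such a loss would tune during adaptation under noisy pseudo-labels.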