🤖 AI Summary
In source-free domain adaptation (SFDA) for facial expression recognition (FER), models struggle to generalize when the target domain contains only unlabeled neutral expressions, lacking expressive variations. Method: We propose a personalized feature translation approach that directly transfers style features in the latent space, without image generation or access to source-domain data. Leveraging a feature translator pre-trained on cross-subject style transfer in the source domain, we introduce expression-consistency and style-aware losses to efficiently adapt latent representations using only target-domain neutral data. Contribution/Results: Our method avoids unstable synthesis of non-neutral images and costly full-model fine-tuning, significantly improving computational efficiency and privacy preservation. It achieves state-of-the-art performance across multiple FER benchmarks under SFDA settings and is particularly suitable for resource-constrained real-world applications.
📝 Abstract
Facial expression recognition (FER) models are employed in many video-based affective computing applications, such as human-computer interaction and healthcare monitoring. However, deep FER models often struggle with subtle expressions and high inter-subject variability, limiting their performance in real-world applications. To improve their performance, source-free domain adaptation (SFDA) methods have been proposed to personalize a pretrained source model using only unlabeled target domain data, thereby avoiding data privacy, storage, and transmission constraints. This paper addresses a challenging scenario where source data is unavailable for adaptation, and the only available target data is unlabeled and consists of neutral expressions. SFDA methods are not typically designed to adapt using target data from only a single class. Further, using generative models to synthesize facial images with non-neutral expressions can be unstable and computationally intensive. In this paper, personalized feature translation (PFT) is proposed for SFDA. Unlike current image translation methods for SFDA, our lightweight method operates in the latent space. We first pre-train the translator on the source domain data to transform the subject-specific style features of one source subject into those of another. Expression information is preserved by optimizing a combination of expression-consistency and style-aware objectives. Then, the translator is adapted on neutral target data, without using source data or image synthesis. By translating in the latent space, PFT avoids the complexity and noise of facial expression generation, producing discriminative embeddings optimized for classification. PFT thus eliminates the need for image synthesis, reduces computational overhead through its lightweight translator, and adapts only part of the model, making it efficient compared to image-based translation methods.
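The combined objective described above (preserving expression content while transferring subject-specific style) can be sketched as follows. This is a minimal, hypothetical illustration, assuming the latent representation is already split into "expression" and "style" components and using simple MSE terms; the function names, the split, and the weighting `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def expression_consistency_loss(expr_orig, expr_translated):
    # Penalize changes to the expression component after style translation (MSE).
    return np.mean((expr_orig - expr_translated) ** 2)

def style_aware_loss(style_translated, style_target):
    # Pull the translated style features toward the target subject's style (MSE).
    return np.mean((style_translated - style_target) ** 2)

def pft_objective(expr_orig, expr_translated,
                  style_translated, style_target, lam=1.0):
    # Combined objective: keep expression intact while matching the target style.
    # `lam` (assumed hyperparameter) balances the two terms.
    return (expression_consistency_loss(expr_orig, expr_translated)
            + lam * style_aware_loss(style_translated, style_target))
```

In this sketch, a perfect translation (expression unchanged, style matching the target) drives both terms to zero; any drift in expression content or residual source style increases the objective.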