FAIRWELL: Fair Multimodal Self-Supervised Learning for Wellbeing Prediction

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses fairness disparities in multimodal self-supervised learning (SSL) for health prediction, where sensitive attributes (e.g., gender, race) induce biased representations. We propose FAIRWELL, a novel loss function that jointly optimizes variance, invariance, and covariance regularization within the VICReg framework to learn subject-invariant, robust representations. FAIRWELL explicitly decouples representation dependence on protected attributes, enhancing fairness without substantially compromising downstream classification performance. Experiments on three real-world clinical datasets—D-Vlog, MIMIC, and MODMA—demonstrate that FAIRWELL improves fairness metrics (e.g., Equal Opportunity Difference) by 23.6% on average over baselines, while incurring at most a 0.8% drop in classification accuracy. This advances the Pareto frontier between fairness and predictive performance. To our knowledge, FAIRWELL is the first end-to-end fair representation learning framework tailored for multimodal SSL in health prediction.

📝 Abstract
Early efforts at leveraging self-supervised learning (SSL) to improve machine learning (ML) fairness have proven promising. However, this approach has yet to be explored in a multimodal context. Prior work has shown that, in a multimodal setting, different modalities contain modality-unique information that can complement the information of other modalities. Leveraging this, we propose a novel subject-level loss function to learn fairer representations via the following three mechanisms, adapting the variance-invariance-covariance regularization (VICReg) method: (i) the variance term, which reduces reliance on the protected attribute as a trivial solution; (ii) the invariance term, which ensures consistent predictions for similar individuals; and (iii) the covariance term, which minimizes correlational dependence on the protected attribute. Consequently, our loss function, coined FAIRWELL, aims to obtain subject-independent representations, enforcing fairness in multimodal prediction tasks. We evaluate our method on three challenging real-world heterogeneous healthcare datasets (i.e., D-Vlog, MIMIC, and MODMA), which contain different modalities of varying length and different prediction tasks. Our findings indicate that our framework improves overall fairness with minimal reduction in classification performance and significantly improves on the performance-fairness Pareto frontier.
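To make the three mechanisms concrete, below is a minimal NumPy sketch of a generic VICReg-style objective of the kind the abstract adapts: an invariance term pulling paired embeddings together, a variance hinge preventing dimensional collapse (the "trivial solution"), and a covariance penalty decorrelating embedding dimensions. The function name, weights, and shapes are illustrative assumptions, not the paper's FAIRWELL implementation, which additionally operates at the subject level with respect to protected attributes.

```python
import numpy as np

def vicreg_style_loss(z_a, z_b, lam=25.0, mu=25.0, nu=1.0, eps=1e-4):
    """Illustrative VICReg-style loss over two embedding views of shape (n, d).

    Hypothetical sketch: lam/mu/nu weight the invariance, variance,
    and covariance terms, following the general VICReg recipe.
    """
    n, d = z_a.shape

    # Invariance: mean squared distance between paired embeddings,
    # encouraging consistent representations for the same subject.
    inv = np.mean((z_a - z_b) ** 2)

    # Variance: hinge keeping each dimension's std above a target of 1,
    # so the encoder cannot collapse to a constant (trivial) solution.
    std_a = np.sqrt(z_a.var(axis=0) + eps)
    std_b = np.sqrt(z_b.var(axis=0) + eps)
    var = np.mean(np.maximum(0.0, 1.0 - std_a)) \
        + np.mean(np.maximum(0.0, 1.0 - std_b))

    # Covariance: penalize off-diagonal entries of the covariance matrix,
    # decorrelating embedding dimensions from one another.
    def cov_penalty(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d

    cov = cov_penalty(z_a) + cov_penalty(z_b)
    return lam * inv + mu * var + nu * cov
```

In a FAIRWELL-like adaptation, the covariance term would additionally target correlation between embedding dimensions and the protected attribute, rather than only among embedding dimensions themselves.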
Problem

Research questions and friction points this paper is trying to address.

Improving fairness in multimodal self-supervised learning
Reducing protected attribute dependence in representations
Enhancing fairness-performance tradeoff in healthcare predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Variance-invariance-covariance regularization for fairness
Subject-level loss function for multimodal SSL
Minimizes correlational dependence on protected attributes