FairTune: A Bias-Aware Fine-Tuning Framework Towards Fair Heart Rate Prediction from PPG

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study reveals that while fine-tuning pre-trained models (e.g., PPG-GPT) for heart rate prediction from photoplethysmography (PPG) signals significantly improves accuracy, reducing mean absolute error by up to 80%, it can simultaneously exacerbate gender-based prediction disparities: fairness does not improve as a byproduct of accuracy. To address this, the authors propose FairTune, the first fairness-aware fine-tuning framework specifically designed for physiological foundation models under cross-domain adaptation. FairTune benchmarks three mitigation strategies: inverse-frequency weighting, group-wise distributionally robust optimization, and adversarial debiasing. Evaluated across multi-source datasets spanning ICU monitors, wearable devices, and smartphones, FairTune achieves substantial fairness gains without sacrificing accuracy: it markedly narrows gender performance gaps, and representation analysis confirms its effectiveness in mitigating demographic clustering bias.

📝 Abstract
Foundation models pretrained on physiological data such as photoplethysmography (PPG) signals are increasingly used to improve heart rate (HR) prediction across diverse settings. Fine-tuning these models for local deployment is often seen as a practical and scalable strategy. However, its impact on demographic fairness, particularly under domain shifts, remains underexplored. We fine-tune PPG-GPT, a transformer-based foundation model pretrained on intensive care unit (ICU) data, across three heterogeneous datasets (ICU, wearable, smartphone) and systematically evaluate the effects on HR prediction accuracy and gender fairness. While fine-tuning substantially reduces mean absolute error (by up to 80%), it can simultaneously widen fairness gaps, especially in larger models and under significant distribution shifts. To address this, we introduce FairTune, a bias-aware fine-tuning framework in which we benchmark three mitigation strategies: class weighting based on inverse group frequency (IF), Group Distributionally Robust Optimization (GroupDRO), and adversarial debiasing (ADV). We find that IF and GroupDRO significantly reduce fairness gaps without compromising accuracy, with effectiveness varying by deployment domain. Representation analyses further reveal that mitigation techniques reshape internal embeddings to reduce demographic clustering. Our findings highlight that fairness does not emerge as a natural byproduct of fine-tuning and that explicit mitigation is essential for the equitable deployment of physiological foundation models.
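The abstract names the two loss-level mitigations that proved most effective, IF weighting and GroupDRO. The sketch below illustrates how each reshapes a per-sample HR regression loss; it is a minimal PyTorch sketch under stated assumptions (the paper's implementation is not reproduced here, and names such as `if_weighted_loss`, `GroupDRO`, and the step size `eta` are illustrative).

```python
import torch

# Minimal sketches of the two loss-level mitigations benchmarked in
# FairTune. Names and hyperparameters are illustrative assumptions,
# not the authors' released code.

def if_weighted_loss(preds, targets, group_ids):
    """Inverse-frequency (IF) weighting: scale each sample's error by the
    inverse of its demographic group's frequency, so under-represented
    groups contribute equally to the gradient."""
    per_sample = torch.abs(preds - targets)        # per-sample MAE
    counts = torch.bincount(group_ids).float()     # samples per group
    weights = (counts.sum() / counts)[group_ids]   # inverse group frequency
    weights = weights / weights.mean()             # keep the loss scale stable
    return (weights * per_sample).mean()

class GroupDRO:
    """Group Distributionally Robust Optimization: keep a weight per group,
    exponentially up-weight groups with high current loss, and minimize the
    resulting worst-case-leaning mixture of group losses."""
    def __init__(self, n_groups, eta=0.01):
        self.q = torch.ones(n_groups) / n_groups   # adversarial group weights
        self.eta = eta                             # step size for updating q

    def loss(self, preds, targets, group_ids):
        per_sample = torch.abs(preds - targets)
        group_losses = torch.stack([
            per_sample[group_ids == g].mean() if (group_ids == g).any()
            else per_sample.new_zeros(())          # group absent from batch
            for g in range(len(self.q))
        ])
        with torch.no_grad():                      # exponentiated-gradient step on q
            self.q = self.q * torch.exp(self.eta * group_losses)
            self.q = self.q / self.q.sum()
        return (self.q * group_losses).sum()       # gradient flows via group_losses
```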
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning PPG foundation models worsens demographic fairness gaps (a minimal gap metric is sketched after this list)
Gender bias increases during domain shifts across healthcare settings
Current methods lack explicit bias mitigation for heart rate prediction
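The fairness gap these points refer to can be made concrete as the difference in mean absolute error between gender groups. Below is a minimal sketch of that evaluation, assuming per-sample predictions and a binary gender label; the specific metric (absolute MAE gap) is our assumption, and the paper may also report other fairness measures.

```python
import numpy as np

def gender_mae_gap(preds, targets, gender):
    """Absolute difference in mean absolute error (MAE) between two
    gender groups; 0 indicates parity. `gender` is a 0/1 array."""
    preds, targets, gender = map(np.asarray, (preds, targets, gender))
    errors = np.abs(preds - targets)
    mae_group0 = errors[gender == 0].mean()
    mae_group1 = errors[gender == 1].mean()
    return abs(mae_group0 - mae_group1)
```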
Innovation

Methods, ideas, or system contributions that make the work stand out.

FairTune, a bias-aware fine-tuning framework for physiological foundation models
Benchmarks three fairness mitigation strategies: IF weighting, GroupDRO, and adversarial debiasing (the adversarial component is sketched after this list)
Reshapes internal embeddings to reduce demographic clustering
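The third benchmarked strategy, adversarial debiasing (ADV), trains a small adversary to recover gender from the model's internal embeddings and penalizes the encoder when it succeeds. The sketch below uses a gradient-reversal layer, one common way to implement ADV; the layer sizes, `lambda_adv`, and the assumption that the fine-tuned PPG-GPT exposes a pooled embedding are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward
    pass, pushing the encoder to remove gender information the adversary uses."""
    @staticmethod
    def forward(ctx, x, lambda_adv):
        ctx.lambda_adv = lambda_adv
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_adv * grad_output, None

class AdversarialDebiasHead(nn.Module):
    """HR regressor plus a gender adversary fed reversed gradients.
    Embedding and hidden dimensions here are illustrative assumptions."""
    def __init__(self, embed_dim=256, lambda_adv=1.0):
        super().__init__()
        self.hr_head = nn.Linear(embed_dim, 1)
        self.adversary = nn.Sequential(
            nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 2))
        self.lambda_adv = lambda_adv

    def forward(self, embedding):
        hr_pred = self.hr_head(embedding).squeeze(-1)
        gender_logits = self.adversary(
            GradReverse.apply(embedding, self.lambda_adv))
        return hr_pred, gender_logits

def adv_loss(hr_pred, hr_true, gender_logits, gender_true):
    # Minimize HR error; the adversary's cross-entropy, backpropagated
    # through the reversed gradient, penalizes gender-predictive embeddings.
    return (nn.functional.l1_loss(hr_pred, hr_true)
            + nn.functional.cross_entropy(gender_logits, gender_true))
```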
Lovely Yeswanth Panchumarthi
Computer Science, Emory University
Saurabh Kataria
School of Nursing, Emory University
Yi Wu
Computer Science, University of Oklahoma
Xiao Hu
School of Nursing, Emory University
Alex Fedorov
Emory University
Representation Learning · Multimodal Learning · Self-Supervision · Neuroimaging
Hyunjung Gloria Kwak
School of Nursing, Emory University