Achieving Fairness Without Harm via Selective Demographic Experts

📅 2025-11-09
📈 Citations: 0 · Influential: 0
🤖 AI Summary
In high-stakes domains such as healthcare, achieving both fairness and predictive accuracy remains challenging. This paper proposes a selective demographic-expert mechanism that learns group-specific representations and trains an independent, personalized classifier for each subgroup. Crucially, it dynamically activates the appropriate expert per input under a no-harm constraint, ensuring that no subgroup's performance degrades and thereby avoiding the accuracy trade-offs inherent in conventional debiasing methods. The framework is trained end to end via multi-task optimization, preserving each subgroup's original accuracy. Experiments on three real-world medical datasets (ocular disease, skin cancer, and X-ray diagnosis) and two facial-attribute datasets demonstrate substantial fairness improvements, e.g., reductions of 37–62% in equal opportunity difference, while strictly maintaining baseline accuracy across all subgroups. The authors position this as the first method to improve fairness with zero accuracy loss for any subgroup.
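The mechanism described above maps naturally onto a mixture-of-experts layout. Below is a minimal PyTorch sketch, assuming a shared backbone feeding group-specific representation heads and personalized classifiers; all module names, dimensions, and the hard routing by group index are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


class DemographicExpert(nn.Module):
    """One expert: a group-specific representation head plus a personalized classifier."""

    def __init__(self, feat_dim: int, repr_dim: int, num_classes: int):
        super().__init__()
        self.repr_head = nn.Sequential(nn.Linear(feat_dim, repr_dim), nn.ReLU())
        self.classifier = nn.Linear(repr_dim, num_classes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.repr_head(features))


class SelectiveExpertModel(nn.Module):
    """Shared backbone that routes each input to the expert for its demographic group."""

    def __init__(self, in_dim: int, feat_dim: int, repr_dim: int,
                 num_classes: int, num_groups: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.experts = nn.ModuleList(
            [DemographicExpert(feat_dim, repr_dim, num_classes)
             for _ in range(num_groups)]
        )

    def forward(self, x: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
        features = self.backbone(x)
        # Compute every expert's logits, then pick each sample's by group index.
        logits = torch.stack([expert(features) for expert in self.experts], dim=1)
        return logits[torch.arange(x.size(0)), group]


# Multi-task training step: one loss over all groups updates backbone and experts jointly.
model = SelectiveExpertModel(in_dim=32, feat_dim=64, repr_dim=32,
                             num_classes=2, num_groups=2)
x, y, g = torch.randn(8, 32), torch.randint(0, 2, (8,)), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x, g), y)
loss.backward()
```

Summing per-sample losses across groups in one objective reflects the multi-task training the summary mentions; the no-harm selection step is sketched separately under Innovation below.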

📝 Abstract
As machine learning systems become increasingly integrated into human-centered domains such as healthcare, ensuring fairness while maintaining high predictive performance is critical. Existing bias mitigation techniques often impose a trade-off between fairness and accuracy, inadvertently degrading performance for certain demographic groups. In high-stakes domains like clinical diagnosis, such trade-offs are ethically and practically unacceptable. In this study, we propose a fairness-without-harm approach by learning distinct representations for different demographic groups and selectively applying demographic experts consisting of group-specific representations and personalized classifiers through a no-harm constrained selection. We evaluate our approach on three real-world medical datasets -- covering eye disease, skin cancer, and X-ray diagnosis -- as well as two face datasets. Extensive empirical results demonstrate the effectiveness of our approach in achieving fairness without harm.
Problem

Research questions and friction points this paper is trying to address.

Ensuring fairness without compromising predictive performance in machine learning
Avoiding the fairness-accuracy trade-offs imposed by existing bias mitigation techniques
Applying demographic-specific experts without degrading performance for any group
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning distinct representations for different demographic groups
Selectively applying demographic experts, each pairing a group-specific representation with a personalized classifier
Choosing experts through a no-harm constrained selection (sketched below)
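As a concrete reading of the no-harm constrained selection, the sketch below activates a group's expert only when it matches or beats a shared baseline on that group's validation split. The function names, the accuracy criterion, and the fallback-to-baseline rule are assumptions for illustration, not the paper's exact procedure.

```python
from typing import Callable, Dict, Tuple

import torch


@torch.no_grad()
def group_accuracy(predict: Callable[[torch.Tensor], torch.Tensor],
                   x: torch.Tensor, y: torch.Tensor) -> float:
    """Accuracy of a logit-producing model on one group's validation data."""
    return (predict(x).argmax(dim=-1) == y).float().mean().item()


def no_harm_selection(
    baseline: Callable[[torch.Tensor], torch.Tensor],
    experts: Dict[int, Callable[[torch.Tensor], torch.Tensor]],
    val_sets: Dict[int, Tuple[torch.Tensor, torch.Tensor]],
) -> Dict[int, bool]:
    """Per group, activate the expert only if it does not underperform the baseline."""
    use_expert = {}
    for gid, (x, y) in val_sets.items():
        base_acc = group_accuracy(baseline, x, y)
        expert_acc = group_accuracy(experts[gid], x, y)
        # No-harm constraint: fall back to the baseline unless the expert
        # is at least as accurate on this group.
        use_expert[gid] = expert_acc >= base_acc
    return use_expert


# Illustrative usage with two linear "models" on random data.
baseline = torch.nn.Linear(16, 2)
experts = {g: torch.nn.Linear(16, 2) for g in (0, 1)}
val_sets = {g: (torch.randn(32, 16), torch.randint(0, 2, (32,))) for g in (0, 1)}
print(no_harm_selection(baseline, experts, val_sets))
```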
👥 Authors
Xuwei Tan
Department of Computer Science and Engineering, The Ohio State University, USA
Yuanlong Wang
Department of Computer Science and Engineering, The Ohio State University, USA
Thai-Hoang Pham
Ohio State University
Research interests: Trustworthy AI, Natural Language Processing, Machine Learning, Bioinformatics, Health Informatics
Ping Zhang
Department of Computer Science and Engineering, The Ohio State University, USA
Xueru Zhang
Assistant Professor, Computer Science and Engineering, The Ohio State University
Research interests: responsible machine learning