🤖 AI Summary
Addressing the challenge of balancing fairness and predictive performance in sensitive domains such as healthcare, this paper proposes a fairness-oriented sparse low-rank decomposition framework. Unlike conventional singular value decomposition (SVD) applications for model compression, our approach is the first to leverage SVD for fairness enhancement: by quantifying the differential contribution of entries in the unitary matrices to group-level bias, we design an interpretable sparsity-inducing pruning strategy that selectively removes bias-inducing components. The method integrates low-rank factorization, sensitive-attribute–aware bias evaluation across demographic groups, and systematic hyperparameter ablation analysis. Evaluated on multiple benchmark datasets, it achieves substantial fairness improvements—e.g., reducing Equalized Odds disparity by 32%–58%—while preserving or marginally improving prediction accuracy. This demonstrates both computational efficiency and practical applicability in resource-constrained settings.
📝 Abstract
As deep learning (DL) techniques become integral to various applications, ensuring model fairness while maintaining high performance has become increasingly critical, particularly in sensitive fields such as medical diagnosis. Although a variety of bias-mitigation methods have been proposed, many rely on computationally expensive debiasing strategies or suffer substantial drops in model accuracy, which limits their practicality in real-world, resource-constrained settings. To address this issue, we propose a fairness-oriented low-rank factorization (LRF) framework that leverages singular value decomposition (SVD) to improve DL model fairness. Unlike traditional SVD, which is mainly used for model compression by decomposing and reducing weight matrices, our work shows that SVD can also serve as an effective tool for fairness enhancement. Specifically, we observe that elements in the unitary matrices obtained from SVD contribute unequally to model bias across groups defined by sensitive attributes. Motivated by this observation, we propose a method, named FairLRF, that selectively removes bias-inducing elements from the unitary matrices to reduce group disparities, thus enhancing model fairness. Extensive experiments show that our method outperforms conventional LRF methods as well as state-of-the-art fairness-enhancing techniques. Additionally, an ablation study examines how major hyperparameters influence the performance of the processed models. To the best of our knowledge, this is the first work to utilize SVD not primarily for compression but for fairness enhancement.
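The core operation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bias scores passed in (`bias_score_U`, `bias_score_V`) stand in for the paper's sensitive-attribute–aware contribution measure, whose exact definition is not given in the abstract, and the function and parameter names are hypothetical.

```python
import numpy as np

def fair_lrf_prune(W, bias_score_U, bias_score_V, rank, prune_frac=0.1):
    """Sketch of a FairLRF-style step: truncated SVD plus selective
    zeroing of (assumed) bias-inducing entries in the unitary factors.

    W            : (m, n) weight matrix
    bias_score_U : (m, rank) assumed per-element bias contributions for U
    bias_score_V : (n, rank) assumed per-element bias contributions for V
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    U, S, Vt = U[:, :rank], S[:rank], Vt[:rank, :]  # low-rank truncation

    def prune(M, score, frac):
        # Zero the frac of entries with the highest bias score; the
        # paper's actual selection rule may differ from this sketch.
        k = int(frac * M.size)
        if k == 0:
            return M
        idx = np.unravel_index(np.argsort(score, axis=None)[-k:], M.shape)
        M = M.copy()
        M[idx] = 0.0
        return M

    U_p = prune(U, bias_score_U, prune_frac)
    Vt_p = prune(Vt, bias_score_V.T, prune_frac)
    return U_p @ np.diag(S) @ Vt_p  # debiased low-rank reconstruction
```

With `prune_frac=0` this reduces to a plain rank-`rank` SVD reconstruction, which makes the fairness-specific step (the selective zeroing) easy to isolate and ablate.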