GFLC: Graph-based Fairness-aware Label Correction for Fair Classification

📅 2025-06-18
📈 Citations: 0
✹ Influential: 0
🤖 AI Summary
To address fairness evaluation distortion caused by biased and noisy training labels, this paper proposes a graph-structure-driven label correction method that simultaneously mitigates label noise and enforces demographic parity. The method innovatively integrates Ricci-flow-optimized graph Laplacian regularization with explicit fairness constraints, enabling joint optimization of noise robustness and fairness within a confidence-aware modeling framework. Experimental results across multiple benchmark datasets demonstrate substantial improvements in the joint performance of classification accuracy and fairness: the difference in demographic parity (ΔDP) is reduced by 37.2%, outperforming state-of-the-art debiasing and robust learning baselines.
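The ΔDP figure quoted above is the demographic parity gap: the absolute difference in positive-prediction rates between sensitive groups. A minimal sketch of how it is typically computed (function name and argument shapes are illustrative, not taken from the paper):

```python
import numpy as np

def delta_dp(y_pred, sensitive):
    """Demographic parity gap: |P(yhat=1 | s=1) - P(yhat=1 | s=0)|."""
    y_pred = np.asarray(y_pred, dtype=float)
    sensitive = np.asarray(sensitive)
    p1 = y_pred[sensitive == 1].mean()  # positive rate in group s=1
    p0 = y_pred[sensitive == 0].mean()  # positive rate in group s=0
    return abs(p1 - p0)
```

A "37.2% reduction" in ΔDP means this gap shrinks by that fraction relative to the baseline classifier.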

📝 Abstract
Fairness in machine learning (ML) is critically important for building trustworthy ML systems, as artificial intelligence (AI) systems increasingly impact various aspects of society, including healthcare decisions and legal judgments. Moreover, numerous studies have documented unfair outcomes in ML and the need for more robust fairness-aware methods. However, the data used to train models and develop debiasing techniques often contains biased and noisy labels. As a result, label bias in the training data degrades model performance and misrepresents the fairness of classifiers at test time. To tackle this problem, our paper presents Graph-based Fairness-aware Label Correction (GFLC), an efficient method for correcting label noise while preserving demographic parity in datasets. In particular, our approach combines three key components: a prediction confidence measure, graph-based regularization through Ricci-flow-optimized graph Laplacians, and explicit demographic parity incentives. Our experimental findings demonstrate the effectiveness of the proposed approach, showing significant improvements in the trade-off between performance and fairness metrics compared to the baseline.
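The three components named in the abstract can be combined into a single objective over soft labels: a confidence-weighted fidelity term, a graph Laplacian smoothness term, and a demographic-parity penalty. The sketch below is a minimal illustration under those assumptions only; the paper's exact formulation (including the Ricci-flow edge weighting of W) is not reproduced here, and all names and hyperparameters are hypothetical:

```python
import numpy as np

def correct_labels(y_noisy, conf, W, sensitive, lam=1.0, gamma=0.5, iters=200, lr=0.1):
    """Illustrative label-correction sketch (not the paper's exact algorithm).

    Gradient descent on soft labels z in [0, 1] minimizing
        sum_i conf_i (z_i - y_i)^2  +  lam * z^T L z  +  gamma * DP(z)^2,
    where L is the graph Laplacian of the affinity matrix W and DP(z) is
    the demographic parity gap between sensitive groups.
    """
    L = np.diag(W.sum(axis=1)) - W                # unnormalized graph Laplacian
    z = y_noisy.astype(float).copy()
    g0, g1 = (sensitive == 0), (sensitive == 1)
    for _ in range(iters):
        dp = z[g1].mean() - z[g0].mean()          # signed parity gap
        grad = 2 * conf * (z - y_noisy) + 2 * lam * (L @ z)
        grad[g1] += 2 * gamma * dp / g1.sum()     # d(gamma * dp^2)/dz, group s=1
        grad[g0] -= 2 * gamma * dp / g0.sum()     # d(gamma * dp^2)/dz, group s=0
        z = np.clip(z - lr * grad, 0.0, 1.0)
    return (z > 0.5).astype(int), z               # corrected hard and soft labels
```

On a toy graph with two tight clusters, a low-confidence flipped label is pulled back toward its neighbors' labels by the Laplacian term, which is the core intuition behind graph-based label correction.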
Problem

Research questions and friction points this paper is trying to address.

Addressing biased and noisy labels in ML training data
Ensuring demographic parity in fair classification tasks
Improving trade-off between model performance and fairness metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-based label correction for fairness
Ricci-flow-optimized graph Laplacians regularization
Demographic parity incentives in noise correction
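The Ricci-flow optimization of the graph Laplacian is the most distinctive ingredient listed above. As a rough intuition, discrete Ricci flow rescales edge weights by their curvature, so that "bridge-like" (negatively curved) edges and densely clustered (positively curved) edges evolve differently. The toy sketch below uses the simple combinatorial Forman curvature F(i, j) = 4 - deg(i) - deg(j) purely as a stand-in; the paper's actual curvature notion and flow update are not reproduced here:

```python
import numpy as np

def forman_ricci_flow(W, steps=10, eps=0.1):
    """Toy discrete Ricci flow on edge weights (Forman curvature stand-in).

    Each step rescales every edge weight by (1 - eps * kappa), approximating
    dw/dt = -kappa * w: positively curved edges shrink, negatively curved
    (bridge-like) edges grow.
    """
    W = W.astype(float).copy()
    for _ in range(steps):
        deg = (W > 0).sum(axis=1)                     # combinatorial degrees
        for i, j in zip(*np.nonzero(np.triu(W))):     # each undirected edge once
            kappa = 4 - deg[i] - deg[j]               # simplified Forman curvature
            W[i, j] = W[j, i] = max(W[i, j] * (1 - eps * kappa), 0.0)
    return W
```

On a triangle every edge has curvature zero and the weights are fixed points of this flow; on a path the end edges are positively curved and shrink, illustrating how the flow reshapes the graph before the Laplacian is formed.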
Modar Sulaiman
University of Tartu, Institute of Computer Science, Tartu, Estonia
Kallol Roy
Assistant Professor, University of Tartu, Estonia
#MachineLearning #Sheaves #AlgebraicTopology #unitartucs