Enhancing Class Fairness in Classification with A Two-Player Game Approach

📅 2024-05-31
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
While data augmentation improves overall classification accuracy, it often exacerbates inter-class performance imbalance, undermining model fairness. This work identifies, for the first time, that such unfairness stems not merely from augmented data but from an inherent optimization bias in standard classifier training. Method: a fairness-aware classification framework that explicitly optimizes for class-level performance parity, formulated as an adversarial two-player game: one player updates the classifier to minimize a weighted loss, while the other dynamically adjusts class-wise weights to amplify errors on underperforming classes. A multiplicative weight update algorithm with theoretical convergence guarantees drives the weight player. Results: extensive experiments on five benchmark datasets show that the method significantly improves class-wise accuracy balance, reducing the fairness gap by 12.7%, while leaving overall accuracy nearly unchanged.

📝 Abstract
Data augmentation is widely applied and has shown its benefits in different machine learning tasks. However, as recently observed in some downstream tasks, data augmentation may introduce an unfair impact on classification. While it can improve the performance of some classes, it can actually be detrimental to other classes, which can be problematic in some application domains. In this paper, to counteract this phenomenon, we propose a FAir Classification approach with a Two-player game (FACT). We first formulate the training of a classifier with data augmentation as a fair optimization problem, which can be further written as an adversarial two-player game. Following this formulation, we propose a novel multiplicative weight optimization algorithm, for which we theoretically prove that it converges to a solution that is fair across classes. Interestingly, our formulation also reveals that this fairness issue over classes is not due to data augmentation only, but is in fact a general phenomenon. Our empirical experiments demonstrate that the performance of our learned classifiers is indeed more fairly distributed across classes on five datasets, with only limited impact on the average accuracy.
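The weight player described above can be sketched as a multiplicative-weights step over per-class losses. This is a minimal illustration only, not the paper's exact algorithm: the learning rate `eta`, the exponential update form, and the toy loss values are assumptions for the sketch. In the full two-player game, the classifier would minimize the weighted loss between each of these weight updates.

```python
import numpy as np

def multiplicative_weight_update(weights, class_losses, eta=0.5):
    """One multiplicative-weights step: classes with higher loss get
    exponentially larger weight, then weights are renormalized."""
    new_w = weights * np.exp(eta * class_losses)
    return new_w / new_w.sum()  # project back onto the probability simplex

# Toy example (hypothetical losses): class 2 currently performs worst.
w = np.ones(3) / 3
losses = np.array([0.1, 0.2, 0.9])
for _ in range(10):
    w = multiplicative_weight_update(w, losses)

print(w)  # weight mass shifts toward the underperforming class
```

The exponential update means a persistently underperforming class accumulates weight geometrically, so the classifier player is forced to pay increasing attention to it until the per-class losses even out.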
Problem

Research questions and friction points this paper is trying to address.

Data augmentation can improve some classes while hurting others
How to balance per-class performance without sacrificing average accuracy
Whether class-dependent effects are specific to augmentation or a general phenomenon
Innovation

Methods, ideas, or system contributions that make the work stand out.

CLAss-dependent Multiplicative-weights method (CLAM)
Adversarial two-player game formulation with a convergence guarantee
Balances individual class performance with limited impact on average accuracy
Yunpeng Jiang
Shanghai Jiao Tong University
Paul Weng
Duke Kunshan University
Artificial Intelligence · Reinforcement Learning/Markov Decision Process · Qualitative/Ordinal Models
Yutong Ban
Shanghai Jiao Tong University