Fairness without Demographics through Learning Graph of Gradients

πŸ“… 2024-12-04
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Achieving algorithmic fairness without access to sensitive attributes (e.g., gender, race) remains a critical challenge, particularly under privacy constraints or data scarcity. Method: This paper proposes a robust group-aware learning framework that operates without demographic information. Its core innovation is the construction of a β€œgradient graph” β€” the first use of model gradient correlations β€” to replace conventional hard grouping based on features or labels. Leveraging graph neural networks and an adversarial weighting mechanism, the framework performs soft clustering and adaptive subgroup reweighting. Contribution/Results: The method significantly improves robustness to label noise, unknown subgroups, and underrepresented populations. On multiple benchmark datasets, it reduces the equal opportunity difference by over 40% on average while preserving near-original accuracy. This establishes a novel paradigm for fair machine learning in privacy-sensitive and data-sparse settings.

Technology Category

Application Category

πŸ“ Abstract
Machine learning systems are notoriously prone to biased predictions about certain demographic groups, leading to algorithmic fairness issues. Due to privacy concerns and data quality problems, some demographic information may not be available in the training data and the complex interaction of different demographics can lead to a lot of unknown minority subpopulations, which all limit the applicability of group fairness. Many existing works on fairness without demographics assume the correlation between groups and features. However, we argue that the model gradients are also valuable for fairness without demographics. In this paper, we show that the correlation between gradients and groups can help identify and improve group fairness. With an adversarial weighting architecture, we construct a graph where samples with similar gradients are connected and learn the weights of different samples from it. Unlike the surrogate grouping methods that cluster groups from features and labels as proxy sensitive attribute, our method leverages the graph structure as a soft grouping mechanism, which is much more robust to noises. The results show that our method is robust to noise and can improve fairness significantly without decreasing the overall accuracy too much.
Problem

Research questions and friction points this paper is trying to address.

Algorithmic fairness
Machine learning bias
Minority groups representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient Similarity
Graph Learning
Group Fairness Enhancement
πŸ”Ž Similar Papers
No similar papers found.
Yingtao Luo
Yingtao Luo
PhD Candidate, Carnegie Mellon University
Machine LearningMedical Decision MakingFoundation ModelAI for Science
Zhixun Li
Zhixun Li
Chinese University of Hong Kong
Large Language ModelsAI for ScienceData Mining
Q
Qiang Liu
CRIPAC, MAIS, Institute of Automation, Chinese Academy of Sciences, Beijing, China
J
Jun Zhu
Department of Computer Science and Technology, Institute for AI, THBI Lab, BNRist Center, Tsinghua-Bosch Joint ML Center, Tsinghua University, Beijing, China