Fairness for the People, by the People: Minority Collective Action

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address inherent biases against minority groups in deployed machine learning models, this paper proposes a user-side, decentralized fairness optimization framework: without requiring enterprise involvement or modifications to the training pipeline, minority-group users collectively intervene by strategically relabeling their own input instances. The approach is model-agnostic, compatible with black-box models, and minimally invasive, marking the first effort to empower end users with agency over fairness improvement. The authors design three approximately optimal relabeling algorithms and evaluate them across multiple real-world datasets. Results show that adjusting labels for only a small fraction of samples significantly reduces inter-group predictive unfairness, decreasing equal opportunity difference by up to 40%, while preserving overall predictive accuracy, with classification error increasing by less than 0.5%.
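The fairness metric the summary cites, equal opportunity difference, is the gap in true-positive rates between demographic groups. A minimal sketch of how it is computed (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates between two demographic groups."""
    tpr = {}
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)   # qualified members of group g
        tpr[g] = y_pred[mask].mean()          # fraction correctly predicted positive
    return abs(tpr[0] - tpr[1])

# Toy example: the classifier recalls all of group 0's positives but none of group 1's.
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 1, 1, 1, 0, 1])
print(equal_opportunity_difference(y_true, y_pred, group))  # → 1.0
```

A 40% reduction in this quantity, as reported, means the inter-group TPR gap shrinks by 40% relative to the unmodified training data.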

📝 Abstract
Machine learning models often preserve biases present in training data, leading to unfair treatment of certain minority groups. Existing firm-side bias mitigation techniques typically incur utility costs and require organizational buy-in. Since many models rely on user-contributed data, end-users can instead induce fairness through the framework of Algorithmic Collective Action, in which a coordinated minority group strategically relabels its own data to enhance fairness without altering the firm's training process. We propose three practical, model-agnostic methods to approximate ideal relabeling and validate them on real-world datasets. Our findings show that a subgroup of the minority can substantially reduce unfairness with a small impact on the overall prediction error.
Problem

Research questions and friction points this paper is trying to address.

Mitigating unfair treatment of minority groups in machine learning models
Enabling end-users to induce fairness through strategic data relabeling
Reducing unfairness with minimal impact on overall prediction error
Innovation

Methods, ideas, or system contributions that make the work stand out.

User-contributed data relabeling for fairness
Model-agnostic methods for bias mitigation
Minority collective action without process alteration
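The collective-action intervention can be pictured as a pre-training step carried out by the coordinated users themselves. The heuristic below (flipping a small fraction of the minority group's negative labels before contributing data) is a hypothetical illustration of the general mechanism, not one of the paper's three algorithms:

```python
import numpy as np

def collective_relabel(y, group, minority=1, frac=0.05, seed=0):
    """Hypothetical sketch of a user-side intervention: a coordinated
    fraction `frac` of minority-group members with negative labels
    relabel their own instances as positive before the data reaches
    the firm's (unmodified, black-box) training pipeline."""
    rng = np.random.default_rng(seed)
    candidates = np.flatnonzero((group == minority) & (y == 0))
    k = int(frac * len(candidates))                  # size of the acting subgroup
    flip = rng.choice(candidates, size=k, replace=False)
    y_new = y.copy()                                 # firm sees relabeled data only
    y_new[flip] = 1
    return y_new
```

Because only labels change, the firm's training process is untouched, which is what makes the approach model-agnostic and compatible with black-box models.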