🤖 AI Summary
This work addresses fairness degradation in federated learning caused by group-specific distributed concept drift: concept drift that occurs independently across client groups with distinct sensitive attributes while the global model maintains a single shared hypothesis, so that overall accuracy remains stable even as predictions grow increasingly disparate across groups. We formally define this problem for the first time and adapt an existing distributed concept drift adaptation algorithm into a fairness-aware multi-model collaborative framework. Our method comprises: (i) local group-specific drift detection, (ii) continuous temporal clustering of models, (iii) adaptive weighted federated aggregation, and (iv) fairness-constrained optimization. Extensive experiments on multiple real-world datasets demonstrate that our approach reduces Equalized Odds disparity by an average of 62% over baseline methods while maintaining classification accuracy above 98%, thereby significantly alleviating the fairness–accuracy trade-off.
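For concreteness, the Equalized Odds disparity reported above can be measured as the largest gap in group-conditional true-positive and false-positive rates. The sketch below is a minimal illustration assuming binary labels and a binary sensitive attribute; the function name and toy data are ours, not the paper's.

```python
import numpy as np

def equalized_odds_disparity(y_true, y_pred, group):
    """Largest gap in TPR and FPR between two sensitive groups.

    Equalized Odds asks that P(y_hat=1 | Y=y, A=a) be equal across
    groups a for both y=0 and y=1; the disparity is the largest gap.
    """
    gaps = []
    for y in (0, 1):  # condition on the true label
        rates = []
        for g in (0, 1):  # one rate per sensitive group
            mask = (y_true == y) & (group == g)
            rates.append(y_pred[mask].mean())  # P(y_hat=1 | Y=y, A=g)
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# toy check: predictions that favour group 0 on positive examples
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
group  = np.array([0, 0, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0])
print(equalized_odds_disparity(y_true, y_pred, group))  # 0.5 (TPR gap)
```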
📝 Abstract
In the evolving field of machine learning, ensuring group fairness has become a critical concern, prompting the development of algorithms designed to mitigate bias in decision-making processes. Group fairness refers to the principle that a model's decisions should be equitable across groups defined by sensitive attributes such as gender or race, ensuring that individuals from privileged and unprivileged groups are treated fairly and receive similar outcomes. However, achieving fairness in the presence of group-specific concept drift remains an unexplored frontier, and our research represents a first effort in this direction. Group-specific concept drift refers to situations where one group experiences concept drift over time while another does not, leading to a decrease in fairness even when overall accuracy remains fairly stable. In federated learning (FL), where clients collaboratively train models, the distributed setting further amplifies these challenges: each client can experience group-specific concept drift independently while still sharing underlying concepts with other clients, creating a complex and dynamic environment for maintaining fairness. The most significant contribution of our research is the formalization and introduction of the problem of group-specific concept drift and its distributed counterpart, shedding light on its critical importance in the field of fairness. In addition, leveraging insights from prior research, we adapt an existing distributed concept drift adaptation algorithm to tackle group-specific distributed concept drift; the adapted algorithm combines a multi-model approach, a local group-specific drift detection mechanism, and continuous clustering of models over time. Our experimental findings highlight the importance of addressing group-specific concept drift and its distributed counterpart to advance fairness in machine learning.
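The abstract mentions a local group-specific drift detection mechanism but does not specify it; below is a minimal sketch of one plausible realization, a per-group sliding-window error test. The class, window size, and threshold are illustrative assumptions rather than the paper's actual detector; the point is that tracking error per sensitive group lets a client flag drift that overall accuracy would hide.

```python
import numpy as np
from collections import deque

class GroupDriftDetector:
    """Window-based drift test kept separately per sensitive group.

    Monitoring the error rate per group (rather than overall) is what
    lets a client notice that one group's concept has drifted even while
    aggregate accuracy stays stable. The threshold test is an illustrative
    stand-in, not the specific detector used in the paper.
    """
    def __init__(self, window=100, threshold=0.15):
        self.window, self.threshold = window, threshold
        self.ref = {}      # reference error rate per group
        self.recent = {}   # sliding window of recent errors per group

    def update(self, group, error):
        win = self.recent.setdefault(group, deque(maxlen=self.window))
        win.append(error)
        if len(win) < self.window:
            return False
        rate = np.mean(win)
        if group not in self.ref:          # first full window = reference
            self.ref[group] = rate
            return False
        if rate - self.ref[group] > self.threshold:
            self.ref[group] = rate         # re-baseline after an alarm
            return True
        return False

# toy stream: group 0 stays stable, group 1's error rate jumps at t = 200
rng = np.random.default_rng(0)
det = GroupDriftDetector()
for t in range(400):
    p = 0.1 if t < 200 else 0.5            # group 1's concept drifts here
    det.update(0, int(rng.random() < 0.1))           # group 0: stable
    if det.update(1, int(rng.random() < p)):          # group 1: drifting
        print(f"group-specific drift flagged at step {t}")
        break
```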