🤖 AI Summary
This work addresses the challenges of representation bias and collaborative bias in federated learning, which arise from data heterogeneity and uneven client participation and lead to unfair model performance and degraded generalization. To jointly model representation fairness and collaborative fairness, the authors propose an alignment-driven embedding regularization mechanism coupled with a dynamic reward-penalty aggregation strategy. The approach enforces semantic consistency between local and global embeddings through alignment-based regularization, and dynamically adjusts each client's aggregation weight based on its participation history and degree of embedding alignment. Extensive experiments across diverse models and datasets demonstrate that the proposed method significantly outperforms existing approaches, improving both overall accuracy and cross-client fairness.
📝 Abstract
With the proliferation of distributed data sources, Federated Learning (FL) has emerged as a key approach to enabling collaborative intelligence through decentralized model training while preserving data privacy. However, conventional FL algorithms often suffer from performance disparities across clients caused by heterogeneous data distributions and unequal participation, leading to unfair outcomes. Specifically, we focus on two core fairness challenges: representation bias, arising from misaligned client representations, and collaborative bias, stemming from inequitable contributions during aggregation, both of which degrade model performance and generalizability. To mitigate these disparities, we propose CoRe-Fed, a unified optimization framework that bridges collaborative and representation fairness via embedding-level regularization and fairness-aware aggregation. First, an alignment-driven mechanism promotes semantic consistency between local and global embeddings to reduce representational divergence. Second, a dynamic reward-penalty aggregation strategy adjusts each client's weight based on participation history and embedding alignment, ensuring contribution-aware aggregation. Extensive experiments across diverse models and datasets demonstrate that CoRe-Fed improves both fairness and model performance over state-of-the-art baselines.
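The abstract describes two mechanisms but gives no formulas, so the following is only a minimal illustrative sketch of how such components are commonly realized, not CoRe-Fed's actual implementation. It assumes the alignment regularizer penalizes cosine misalignment between local and global embeddings, and that the reward-penalty weighting favors clients with higher alignment and lower past participation (both function names and the `beta` trade-off parameter are hypothetical):

```python
import numpy as np

def alignment_loss(local_emb, global_emb):
    # Hypothetical alignment regularizer: mean cosine misalignment
    # between row-wise local and global embeddings (0 when identical).
    ln = local_emb / np.linalg.norm(local_emb, axis=1, keepdims=True)
    gn = global_emb / np.linalg.norm(global_emb, axis=1, keepdims=True)
    cos = np.sum(ln * gn, axis=1)        # per-sample cosine similarity
    return float(np.mean(1.0 - cos))

def aggregation_weights(alignment, participation, beta=0.5):
    # Hypothetical reward-penalty scheme: reward well-aligned clients,
    # boost under-represented (low-participation) ones, then normalize.
    score = alignment + beta * (1.0 - participation)
    w = np.exp(score)                    # softmax over client scores
    return w / w.sum()
```

In this sketch, a well-aligned client that has rarely participated receives the largest aggregation weight, which is one plausible reading of "contribution-aware aggregation"; the paper's actual update rule may differ.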