🤖 AI Summary
Statistical heterogeneity—particularly feature heterogeneity—among clients in federated learning impedes effective personalized modeling. Method: This paper proposes an adaptive latent-space constraint mechanism that, for the first time, integrates a theoretically grounded adaptive Maximum Mean Discrepancy (MMD) metric into the Ditto framework, imposing heterogeneity-aware dynamic distribution alignment constraints in the latent space. Contribution/Results: The method jointly optimizes global knowledge sharing and local personalized modeling, enabling generalizable adaptation across diverse tasks and levels of heterogeneity. Evaluated on multiple heterogeneous benchmarks, it achieves average accuracy gains of 3.2–5.7% over state-of-the-art personalized FL methods, demonstrating superior generalizability, robustness, and transferability of the proposed constraint mechanism.
📝 Abstract
Federated learning (FL) has become an effective and widely used approach to training deep learning models on decentralized datasets held by distinct clients. FL also strengthens both security and privacy protections for training data. Common challenges associated with statistical heterogeneity between distributed datasets have spurred significant interest in personalized FL (pFL) methods, where models combine aspects of global learning with local modeling specific to each client's unique characteristics. In this work, the efficacy of theoretically supported, adaptive MMD measures within the Ditto framework, a state-of-the-art technique in pFL, are investigated. The use of such measures significantly improves model performance across a variety of tasks, especially those with pronounced feature heterogeneity. While the Ditto algorithm is specifically considered, such measures are directly applicable to a number of other pFL settings, and the results motivate the use of constraints tailored to the various kinds of heterogeneity expected in FL systems.