🤖 AI Summary
Federated learning (FL) faces concurrent challenges of client model architecture heterogeneity, privacy leakage risks (e.g., representation inversion attacks), and high communication overhead.
Method: This paper proposes FedRE, a novel FL framework that (i) generates entangled representations via normalized random weighting to align heterogeneous local feature spaces without architectural alignment; (ii) incorporates label encoding and per-round resampling to mitigate global classifier overconfidence, enhancing generalization and resilience against inversion attacks; and (iii) jointly optimizes local feature aggregation and global supervised training.
Contribution/Results: FedRE achieves an effective balance of efficiency, robustness, and privacy preservation in heterogeneous FL. Experiments on CIFAR-10/100 show roughly 42% lower communication cost than FedAvg, strong resistance to representation inversion attacks, and accuracy on par with homogeneous FL baselines, making it the first framework to achieve all three properties simultaneously in heterogeneous settings.
📝 Abstract
Federated learning (FL) enables collaborative training across clients without compromising privacy. While most existing FL methods assume homogeneous model architectures, client heterogeneity in data and resources renders this assumption impractical, motivating model-heterogeneous FL. To address this problem, we propose Federated Representation Entanglement (FedRE), a framework built upon a novel form of client knowledge termed the entangled representation. In FedRE, each client aggregates its local representations into a single entangled representation using normalized random weights and applies the same weights to integrate the corresponding one-hot label encodings into an entangled-label encoding. These are then uploaded to the server to train a global classifier. During training, each entangled representation is supervised across categories via its entangled-label encoding, while the random weights are resampled each round to introduce diversity, mitigating the global classifier's overconfidence and promoting smoother decision boundaries. Furthermore, because each client uploads only a single cross-category entangled representation along with its entangled-label encoding, FedRE mitigates the risk of representation inversion attacks and reduces communication overhead. Extensive experiments demonstrate that FedRE achieves an effective trade-off among model performance, privacy protection, and communication overhead. The code is available at https://github.com/AIResearch-Group/FedRE.
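The client-side entanglement step described above can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's implementation: it assumes each client holds per-sample feature vectors from its local encoder and one-hot labels, and it applies the same normalized random weights to both, as the abstract describes. All names (`entangle`, dimensions, etc.) are illustrative.

```python
import numpy as np

def entangle(representations, onehot_labels, rng):
    """Aggregate a client's local representations into one entangled
    representation and the matching entangled-label encoding, using the
    SAME normalized random weights for both (per the paper's description)."""
    n = representations.shape[0]
    w = rng.random(n)
    w = w / w.sum()                       # normalized random weights, sum to 1
    entangled_rep = w @ representations   # (d,): weighted sum of features
    entangled_label = w @ onehot_labels   # (C,): soft cross-category encoding
    return entangled_rep, entangled_label

# Illustrative client data: 8 local samples, 16-dim features, 10 classes.
rng = np.random.default_rng(0)
reps = rng.normal(size=(8, 16))
labels = np.eye(10)[rng.integers(0, 10, size=8)]

# Weights would be resampled each communication round, so the server sees
# a different entangled pair per round, which the paper argues mitigates
# classifier overconfidence and inversion attacks.
e_rep, e_lab = entangle(reps, labels, rng)
print(e_rep.shape, e_lab.shape)   # one vector pair uploaded, not 8
```

Note that the entangled-label encoding is a convex combination of one-hot vectors, so it still sums to 1 and acts as a soft target for the global classifier.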