FedRE: A Representation Entanglement Framework for Model-Heterogeneous Federated Learning

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) faces concurrent challenges of client model architecture heterogeneity, privacy leakage risks (e.g., representation inversion attacks), and high communication overhead. Method: This paper proposes FedRE, a novel FL framework that (i) generates entangled representations via normalized random weighting to align heterogeneous local feature spaces without architectural alignment; (ii) incorporates label encoding and per-round resampling to mitigate global classifier overconfidence, enhancing generalization and resilience against inversion attacks; and (iii) jointly optimizes local feature aggregation and global supervised training. Contribution/Results: FedRE achieves an effective balance of efficiency, robustness, and privacy preservation in heterogeneous FL. Experiments on CIFAR-10/100 show roughly 42% lower communication cost than FedAvg, resistance to representation inversion attacks, and accuracy on par with homogeneous FL baselines, making it the first framework to achieve all three properties simultaneously in heterogeneous settings.

📝 Abstract
Federated learning (FL) enables collaborative training across clients without compromising privacy. While most existing FL methods assume homogeneous model architectures, client heterogeneity in data and resources renders this assumption impractical, motivating model-heterogeneous FL. To address this problem, we propose Federated Representation Entanglement (FedRE), a framework built upon a novel form of client knowledge termed entangled representation. In FedRE, each client aggregates its local representations into a single entangled representation using normalized random weights and applies the same weights to integrate the corresponding one-hot label encodings into the entangled-label encoding. These are then uploaded to the server to train a global classifier. During training, each entangled representation is supervised across categories via its entangled-label encoding, while random weights are resampled each round to introduce diversity, mitigating the global classifier's overconfidence and promoting smoother decision boundaries. Furthermore, each client uploads a single cross-category entangled representation along with its entangled-label encoding, mitigating the risk of representation inversion attacks and reducing communication overhead. Extensive experiments demonstrate that FedRE achieves an effective trade-off among model performance, privacy protection, and communication overhead. The code is available at https://github.com/AIResearch-Group/FedRE.
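The client-side step described above (a normalized random weighting applied identically to local representations and their one-hot labels) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name `entangle` and the NumPy data layout are hypothetical.

```python
import numpy as np

def entangle(representations, labels, num_classes, rng):
    """Aggregate a client's local representations into one entangled
    representation, per the FedRE description (illustrative sketch).

    representations: (n, d) array of per-sample features from the local model.
    labels: (n,) array of integer class labels.
    Returns the entangled representation (d,) and the entangled-label
    encoding (num_classes,), a probability vector over classes.
    """
    n = representations.shape[0]
    # Draw random weights and normalize them to sum to 1.
    w = rng.random(n)
    w = w / w.sum()
    # Weighted sum of the local representations.
    z = w @ representations
    # Apply the *same* weights to the one-hot label encodings.
    onehot = np.eye(num_classes)[labels]
    y = w @ onehot
    return z, y
```

Because only the single pair `(z, y)` is uploaded rather than per-sample features, the upload size is independent of the local dataset size, which is the source of both the communication savings and the resistance to per-sample inversion.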
Problem

Research questions and friction points this paper is trying to address.

Addresses model heterogeneity in federated learning
Mitigates privacy risks from representation inversion attacks
Reduces communication overhead in distributed training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entangled representation with random normalized weights
Single cross-category upload reduces communication and privacy risks
Resampled random weights each round prevent overconfidence
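On the server side, the entangled-label encoding is a soft probability vector rather than a hard class index, so the global classifier is naturally trained with a soft-target cross-entropy. The sketch below shows one plausible form of that loss, assuming the paper's standard cross-entropy-against-soft-labels setup; it is an illustration, not the released implementation.

```python
import numpy as np

def soft_cross_entropy(logits, soft_labels):
    """Cross-entropy of the global classifier's predictions against
    entangled-label encodings (probability vectors over classes).

    logits: (batch, num_classes) raw classifier outputs.
    soft_labels: (batch, num_classes) entangled-label encodings.
    """
    # Numerically stable log-softmax.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Expected negative log-likelihood under the soft target.
    return -(soft_labels * log_probs).sum(axis=-1).mean()
```

Supervising each entangled representation across several categories at once, with fresh weights every round, is what spreads probability mass and discourages the overconfident, sharp decision boundaries the bullet above refers to.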
👥 Authors
Yuan Yao (Teleinfo, CAICT)
Lixu Wang (Northwestern University)
Jiaqi Wu (Tsinghua University)
Jin Song (Academy of Mathematics and Systems Science, Chinese Academy of Sciences)
Simin Chen (Columbia University)
Zehua Wang (University of British Columbia)
Zijian Tian (China University of Mining and Technology-Beijing)
Wei Chen (China University of Mining and Technology)
Huixia Li (Beijing Jiaotong University)
Xiaoxiao Li (University of British Columbia)