FedFG: Privacy-Preserving and Robust Federated Learning via Flow-Matching Generation

📅 2026-03-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two persistent weaknesses of conventional federated learning: adversaries can eavesdrop on uploaded gradients or model parameters to leak benign clients' private data, and compromised clients can launch poisoning attacks that corrupt the global model. The authors propose FedFG, a novel framework that introduces flow-matching generative modeling into federated learning for the first time. In FedFG, each client decouples its model into a private feature extractor and a public classifier, replacing the extractor with a flow-matching generator when communicating with the server. The server then performs update validation and robust aggregation based solely on the generated samples. This approach simultaneously ensures strong privacy guarantees and effective defense against various poisoning attacks, achieving state-of-the-art accuracy and security on benchmark datasets including MNIST, FMNIST, and CIFAR-10.
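The summary does not give implementation details of the clients' flow-matching generators. As a rough orientation only, the sketch below shows the standard conditional flow-matching objective (regress a velocity field onto `x1 - x0` along a linear interpolant) on toy 2-D "features"; the affine velocity model, data, and all names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one client's private 2-D feature vectors (the paper's
# generator architecture is not specified here; this only illustrates the
# flow-matching objective itself).
D = 2
data = rng.normal(loc=3.0, scale=0.5, size=(512, D))

# Affine velocity model v(x_t, t) = W @ [x_t, t, 1] -- a deliberately tiny
# hypothetical stand-in for a flow-matching generator network.
W = np.zeros((D, D + 2))

def velocity(W, xt, t):
    inp = np.concatenate([xt, t[:, None], np.ones((len(t), 1))], axis=1)
    return inp @ W.T

def fm_step(W, x1, rng):
    """One conditional flow-matching loss/gradient evaluation."""
    n = len(x1)
    x0 = rng.normal(size=x1.shape)                  # noise endpoint
    t = rng.uniform(size=n)                         # random times in [0, 1]
    xt = (1 - t)[:, None] * x0 + t[:, None] * x1    # linear interpolant
    u = x1 - x0                                     # target velocity
    inp = np.concatenate([xt, t[:, None], np.ones((n, 1))], axis=1)
    err = inp @ W.T - u
    loss = (err ** 2).mean()
    grad = 2.0 * err.T @ inp / (n * D)
    return loss, grad

loss0, _ = fm_step(W, data, np.random.default_rng(1))  # loss before training
for _ in range(500):
    loss, grad = fm_step(W, data, rng)
    W -= 0.1 * grad                                 # plain gradient descent

# "Generate" synthetic features: integrate dx/dt = v(x, t) from noise (Euler).
x = np.random.default_rng(2).normal(size=(256, D))
steps = 50
for k in range(steps):
    t = np.full(len(x), k / steps)
    x = x + velocity(W, x, t) / steps
```

After training, Euler integration from Gaussian noise produces samples whose mean approaches that of the toy data (here 3.0), which is the sense in which the generator "learns an approximation of the underlying data distribution" without shipping the private extractor.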
📝 Abstract
Federated learning (FL) enables distributed clients to collaboratively train a global model using local private data. Nevertheless, recent studies show that conventional FL algorithms still exhibit deficiencies in privacy protection, and the server lacks a reliable and stable aggregation rule for updating the global model. This situation creates opportunities for adversaries: on the one hand, they may eavesdrop on uploaded gradients or model parameters, potentially leaking benign clients' private data; on the other hand, they may compromise clients to launch poisoning attacks that corrupt the global model. To balance accuracy and security, we propose FedFG, a robust FL framework based on flow-matching generation that simultaneously preserves client privacy and resists sophisticated poisoning attacks. On the client side, each local network is decoupled into a private feature extractor and a public classifier. Each client is further equipped with a flow-matching generator that replaces the extractor when interacting with the server, thereby protecting private features while learning an approximation of the underlying data distribution. Complementing the client-side design, the server employs a client-update verification scheme and a novel robust aggregation mechanism driven by synthetic samples produced by the flow-matching generator. Experiments on MNIST, FMNIST, and CIFAR-10 demonstrate that, compared with prior work, our approach adapts to multiple attack strategies and achieves higher accuracy while maintaining strong privacy protection.
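The abstract describes a server-side verification scheme and robust aggregation driven by synthetic samples, but not its concrete rule. One plausible reading, sketched below under my own assumptions (score each uploaded classifier on synthetic labeled features, drop low scorers, average the rest), is shown with simulated honest and poisoned linear-classifier updates; none of the names or choices here come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 clients upload linear-classifier weight vectors;
# clients 0 and 1 are poisoned. Synthetic labeled features stand in for
# the flow-matching generator's output on the server.
n_clients, dim = 10, 8
true_w = rng.normal(size=dim)
X_syn = rng.normal(size=(200, dim))           # synthetic feature samples
y_syn = (X_syn @ true_w > 0).astype(int)      # their labels

updates = true_w + 0.1 * rng.normal(size=(n_clients, dim))
updates[0] = -true_w                          # sign-flip poisoning
updates[1] = rng.normal(size=dim) * 10.0      # random-noise poisoning

def accuracy(w):
    """Score a candidate classifier on the synthetic validation set."""
    return ((X_syn @ w > 0).astype(int) == y_syn).mean()

# Verification: keep only updates that score at least the median accuracy,
# then aggregate the survivors by plain averaging.
scores = np.array([accuracy(w) for w in updates])
keep = scores >= np.median(scores)
global_w = updates[keep].mean(axis=0)
```

The sign-flipped and random updates score far below the honest median on the synthetic set, so they are excluded before averaging; the actual FedFG validation and aggregation rules may differ.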
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Privacy Preservation
Poisoning Attacks
Model Aggregation
Data Leakage
Innovation

Methods, ideas, or system contributions that make the work stand out.

flow-matching generation
federated learning
privacy preservation
robust aggregation
poisoning attack defense
Ruiyang Wang
School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
Rong Pan
1. Department of Computer Science, School of Information Science and Technology 2. Software
Data Mining · Artificial Intelligence · Collaborative Filtering
Zhengan Yao
School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China, and also with the Institute of Advanced Studies Hong Kong, Sun Yat-sen University, Hong Kong