ParaAegis: Parallel Protection for Flexible Privacy-preserved Federated Learning

📅 2025-09-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning (FL), existing privacy-preserving mechanisms such as differential privacy (DP) and homomorphic encryption (HE) struggle to simultaneously deliver strong privacy guarantees, high model utility, and computational efficiency, imposing rigid trade-offs that hinder practical deployment. To address this, the authors propose ParaAegis, a tunable parallel protection framework. It partitions the model, establishes consensus on the partition via a distributed voting mechanism, and combines lightweight DP with HE so that privacy strength and computational overhead can be co-adapted. Crucially, ParaAegis decouples privacy protection into independently executable, parallelizable subtasks, enabling on-demand balancing among prediction accuracy, training speed, and privacy budget. Experiments show that, under identical privacy guarantees, ParaAegis can flexibly improve test accuracy by up to 12.3% or reduce training time by up to 41%, significantly enhancing the adaptability and practicality of FL systems in heterogeneous environments.
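The summary's core mechanism, partitioning the model so that a low-norm portion receives lightweight DP noise while the remainder is encrypted, can be sketched as follows. This is an illustrative reconstruction, not the paper's released code: `partition_by_norm`, `protect_update`, `dp_fraction`, and `noise_scale` are hypothetical names, and the `encrypt` callback merely stands in for a real HE scheme (e.g. Paillier or CKKS).

```python
import numpy as np

def partition_by_norm(params: np.ndarray, dp_fraction: float):
    """Split a flattened parameter vector into a low-norm part (protected
    with lightweight DP noise) and a high-norm part (protected with HE).
    `dp_fraction` is the tunable knob trading efficiency against utility."""
    k = int(len(params) * dp_fraction)
    order = np.argsort(np.abs(params))        # indices, ascending by magnitude
    return order[:k], order[k:]               # (dp_idx, he_idx)

def protect_update(params, dp_idx, he_idx, noise_scale, encrypt):
    """Apply the two protections in parallelizable subtasks."""
    protected = {}
    # Lightweight DP: Gaussian noise on the low-norm coordinates only.
    protected["dp"] = params[dp_idx] + np.random.normal(0.0, noise_scale, size=len(dp_idx))
    # HE: high-norm coordinates are encrypted; `encrypt` is a placeholder
    # for an actual homomorphic encryption call.
    protected["he"] = [encrypt(v) for v in params[he_idx]]
    return protected

rng = np.random.default_rng(0)
w = rng.normal(size=10)
dp_idx, he_idx = partition_by_norm(w, dp_fraction=0.7)
out = protect_update(w, dp_idx, he_idx, noise_scale=0.1, encrypt=lambda v: v)
print(len(out["dp"]), len(out["he"]))  # 7 3
```

Raising `dp_fraction` shifts more coordinates into the cheap DP path (faster, noisier); lowering it shifts them into the HE path (slower, exact), which is the accuracy-versus-training-time dial the paper describes.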

📝 Abstract
Federated learning (FL) faces a critical dilemma: existing protection mechanisms like differential privacy (DP) and homomorphic encryption (HE) enforce a rigid trade-off, forcing a choice between model utility and computational efficiency. This lack of flexibility hinders practical implementation. To address this, we introduce ParaAegis, a parallel protection framework designed to give practitioners flexible control over the privacy-utility-efficiency balance. Our core innovation is a strategic model partitioning scheme. By applying lightweight DP to the less critical, low-norm portion of the model while protecting the remainder with HE, we create a tunable system. A distributed voting mechanism ensures consensus on this partitioning. Theoretical analysis confirms that efficiency and utility can be traded off under the same privacy guarantee. Crucially, the experimental results demonstrate that by adjusting the hyperparameters, our method enables flexible prioritization between model accuracy and training time.
Problem

Research questions and friction points this paper is trying to address.

Balancing privacy, utility, and efficiency in federated learning
Providing flexible control over privacy-utility-efficiency trade-off
Enabling adjustable prioritization between accuracy and training time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Parallel protection framework for flexible FL
Strategic model partitioning with DP and HE
Distributed voting mechanism for consensus
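The third contribution, a distributed voting mechanism for consensus on the partition, could look like the sketch below: each client proposes a 0/1 mask marking which coordinates it would place in the DP partition, and a strict per-coordinate majority decides. This is an assumption about the voting rule for illustration; the paper's actual mechanism may differ.

```python
import numpy as np

def vote_on_partition(client_masks: np.ndarray) -> np.ndarray:
    """Majority vote over client proposals. Each row of `client_masks` is a
    0/1 vector marking the coordinates one client would assign to the DP
    (lightweight-noise) partition; a coordinate lands in the consensus DP
    partition only if a strict majority of clients agree."""
    votes = client_masks.sum(axis=0)                       # per-coordinate tally
    return (votes > client_masks.shape[0] / 2).astype(int)

masks = np.array([
    [1, 1, 0, 0, 1],   # client 0's proposed DP mask
    [1, 0, 0, 1, 1],   # client 1
    [1, 1, 0, 0, 0],   # client 2
])
consensus = vote_on_partition(masks)
print(consensus.tolist())  # [1, 1, 0, 0, 1]
```

Because every client applies the same consensus mask, the DP-noised and HE-encrypted blocks from different clients stay aligned and can be aggregated independently on the server.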
Zihou Wu
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
Yuecheng Li
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
Tianchi Liao
School of Software Engineering, Sun Yat-sen University, Zhuhai, China
Jian Lou
School of Software Engineering, Sun Yat-sen University, Zhuhai, China
Chuan Chen