Efficient Full-Stack Private Federated Deep Learning with Post-Quantum Security

📅 2025-05-09
🏛️ IEEE Transactions on Dependable and Secure Computing
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) faces critical challenges including high quantum-safe aggregation overhead and privacy-utility imbalance arising from the decoupling of differential privacy (DP) and cryptographic mechanisms. This paper proposes Beskar, an end-to-end quantum-safe FL framework that systematically integrates post-quantum secure aggregation—based on CRYSTALS-Kyber—with multi-stage DP for the first time. Beskar introduces hierarchical noise injection, adaptive noise scheduling, and communication-computation co-optimization across training, aggregation, and deployment phases. It further establishes a unified threat model and a formal security-utility trade-off analysis framework. Experiments on multiple benchmark datasets demonstrate that Beskar achieves accuracy comparable to plaintext FL (within ±0.8%), reduces aggregation overhead by 57%, and simultaneously satisfies both semantic security and (ε,δ)-DP guarantees.
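The "(ε,δ)-DP guarantees" above rest on a standard mechanism for privatizing per-client model updates: clip each update to bound its L2 sensitivity, then add calibrated Gaussian noise. The sketch below shows the classical Gaussian-mechanism calibration in NumPy; it is an illustrative baseline, not Beskar's actual hierarchical injection scheme, and the function names and parameters are ours.

```python
import numpy as np

def gaussian_sigma(epsilon, delta, sensitivity):
    # Classical Gaussian-mechanism calibration (valid for epsilon < 1):
    # sigma >= sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon
    return sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon

def privatize_update(update, clip_norm, epsilon, delta, rng):
    # Clip the client update to bound its L2 sensitivity, then add noise.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = gaussian_sigma(epsilon, delta, clip_norm)
    return clipped + rng.normal(0.0, sigma, size=update.shape)
```

Clipping before noising is what fixes the sensitivity; without it, a single unbounded gradient could dominate the aggregate and the (ε,δ) accounting would not hold.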

📝 Abstract
Federated learning (FL) enables collaborative model training while preserving user data privacy by keeping data local. Despite these advantages, FL remains vulnerable to privacy attacks on user updates and model parameters during training and deployment. Secure aggregation protocols have been proposed to protect user updates by encrypting them, but these methods often incur high computational costs and are not resistant to quantum computers. Additionally, differential privacy (DP) has been used to mitigate privacy leakage, but existing methods focus on either secure aggregation or DP, neglecting their potential synergies. To address these gaps, we introduce Beskar, a novel framework that provides post-quantum secure aggregation, optimizes computational overhead for FL settings, and defines a comprehensive threat model that accounts for a wide spectrum of adversaries. We also integrate DP into different stages of FL training to enhance privacy protection in diverse scenarios. Our framework provides a detailed analysis of the trade-offs between security, performance, and model accuracy, representing the first thorough examination of secure aggregation protocols combined with various DP approaches for post-quantum secure FL. Beskar aims to address the pressing privacy and security issues of FL while ensuring quantum safety and robust performance.
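The core idea behind the secure aggregation the abstract describes is pairwise masking: each pair of clients derives a shared random mask, one adds it and the other subtracts it, so the masks cancel when the server sums the updates and individual updates are never exposed. The sketch below demonstrates only the cancellation; the `seed_base`-derived pairwise seeds are a stand-in for the shared secrets that Beskar would establish via a post-quantum KEM such as CRYSTALS-Kyber, and all names here are illustrative.

```python
import numpy as np

def pairwise_mask(client_id, peer_id, shape, seed_base=1234):
    # Hypothetical shared seed per client pair; in a real protocol this
    # secret would come from a post-quantum key exchange, not a fixed seed.
    lo, hi = sorted((client_id, peer_id))
    rng = np.random.default_rng(seed_base + 1000 * lo + hi)
    mask = rng.normal(size=shape)
    # The lower-id client adds the mask, the higher-id client subtracts it,
    # so every pairwise mask cancels in the server-side sum.
    return mask if client_id == lo else -mask

def masked_update(client_id, update, all_ids):
    out = update.copy()
    for peer in all_ids:
        if peer != client_id:
            out += pairwise_mask(client_id, peer, update.shape)
    return out
```

The server sees only masked vectors, yet their sum equals the sum of the raw updates, which is exactly the quantity needed for federated averaging.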
Problem

Research questions and friction points this paper is trying to address.

Enhance privacy in federated learning against quantum attacks
Optimize computational costs in secure aggregation protocols
Integrate differential privacy across FL training stages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-quantum secure aggregation for FL
Optimized computational overhead in FL
Integrated DP at multiple FL stages
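The "adaptive noise scheduling" mentioned in the summary can be pictured as varying the DP noise multiplier across training rounds. The paper does not specify its schedule here, so the snippet below is a purely hypothetical linear decay (heavier noise early, lighter noise late) to make the concept concrete.

```python
def noise_multiplier(round_idx, total_rounds, sigma_start=1.2, sigma_end=0.6):
    # Hypothetical linear decay of the DP noise multiplier over rounds;
    # Beskar's actual adaptive schedule is not described in this summary.
    frac = round_idx / max(total_rounds - 1, 1)
    return sigma_start + frac * (sigma_end - sigma_start)
```

A schedule like this trades early-round privacy headroom for late-round accuracy, which is one plausible route to the near-plaintext accuracy the summary reports.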