AdeptHEQ-FL: Adaptive Homomorphic Encryption for Federated Learning of Hybrid Classical-Quantum Models with Dynamic Layer Sparing

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of simultaneously ensuring privacy preservation, model performance, and communication efficiency in non-IID federated learning, this paper proposes AdeptHEQ-FL, a classical-quantum hybrid federated learning framework. The framework integrates a convolutional neural network (CNN) with parameterized quantum circuits (PQCs) and incorporates three key mechanisms: differential-privacy-based weighted aggregation, selective homomorphic encryption, and dynamic layer freezing. Together, these components provide rigorous privacy guarantees (ε-differential privacy and semantic security) while enhancing quantum representation capability and training efficiency. Extensive experiments on CIFAR-10 demonstrate that the method achieves approximately 25.43% and 14.17% higher accuracy than Standard-FedQNN and FHE-FedQNN, respectively, while reducing communication overhead by roughly 40%. To the authors' knowledge, this is the first work to jointly optimize strong privacy guarantees, high model accuracy, and low communication cost in non-IID quantum-enhanced federated learning.
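The selective homomorphic encryption idea can be illustrated with a toy additively homomorphic, Paillier-style scheme: sensitive layers are summed under encryption, while the remaining layers are aggregated in plaintext. Everything below is a sketch under assumptions — the demo primes are far too small to be secure, the layer-name tags (`classifier`, `pqc`), the fixed-point `SCALE`, and the function names are all illustrative, and the paper would use a production HE scheme rather than this hand-rolled one.

```python
import math
import random

# Toy Paillier parameters (demo only — these primes are far too small
# for real security; a production system would use an HE library).
P, Q = 2_147_483_647, 2_147_483_629
N = P * Q
N2 = N * N
LAM = math.lcm(P - 1, Q - 1)
MU = pow(LAM, -1, N)        # modular inverse of lambda mod N
SCALE = 10_000              # fixed-point scale for float weights

def enc(m: int) -> int:
    # Paillier encryption with g = N + 1; negative m works because
    # Python's three-argument pow computes the modular inverse.
    r = random.randrange(1, N)
    return (pow(N + 1, m, N2) * pow(r, N, N2)) % N2

def dec(c: int) -> int:
    m = ((pow(c, LAM, N2) - 1) // N) * MU % N
    return m - N if m > N // 2 else m  # recover signed values

def secure_sum(updates, sensitive=("classifier", "pqc")):
    """Server-side aggregation: sensitive layers are summed under
    encryption (ciphertext product = plaintext sum); others in the clear."""
    agg = {}
    for name in updates[0]:
        if any(tag in name for tag in sensitive):
            c = 1
            for u in updates:
                c = (c * enc(round(u[name] * SCALE))) % N2  # homomorphic add
            agg[name] = dec(c) / SCALE
        else:
            agg[name] = sum(u[name] for u in updates)
    return agg
```

In a real deployment the server would never hold the decryption key; decryption here is shown only to verify that the ciphertext product decodes to the plaintext sum.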

📝 Abstract
Federated Learning (FL) faces inherent challenges in balancing model performance, privacy preservation, and communication efficiency, especially in non-IID decentralized environments. Recent approaches either sacrifice formal privacy guarantees, incur high overheads, or overlook quantum-enhanced expressivity. We introduce AdeptHEQ-FL, a unified hybrid classical-quantum FL framework that integrates (i) a hybrid CNN-PQC architecture for expressive decentralized learning, (ii) an adaptive accuracy-weighted aggregation scheme leveraging differentially private validation accuracies, (iii) selective homomorphic encryption (HE) for secure aggregation of sensitive model layers, and (iv) dynamic layer-wise adaptive freezing to minimize communication overhead while preserving quantum adaptability. We establish formal privacy guarantees, provide convergence analysis, and conduct extensive experiments on the CIFAR-10, SVHN, and Fashion-MNIST datasets. AdeptHEQ-FL achieves $\approx 25.43\%$ and $\approx 14.17\%$ accuracy improvements over Standard-FedQNN and FHE-FedQNN, respectively, on the CIFAR-10 dataset. Additionally, it reduces communication overhead by freezing less important layers, demonstrating the efficiency and practicality of our privacy-preserving, resource-aware design for FL.
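Mechanism (ii), accuracy-weighted aggregation over differentially private validation accuracies, can be sketched as follows. The Laplace mechanism, the clipping floor, the `epsilon` default, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import random

def dp_noisy_accuracy(acc, epsilon, sensitivity=1.0):
    # Laplace mechanism for epsilon-DP release of a validation accuracy:
    # Lap(0, b) is sampled as the difference of two rate-1 exponentials,
    # scaled by b = sensitivity / epsilon.
    b = sensitivity / epsilon
    return acc + b * (random.expovariate(1.0) - random.expovariate(1.0))

def aggregate(client_weights, client_accs, epsilon=1.0):
    # Weight each client's parameter vector by its DP-noised validation
    # accuracy (clipped to stay positive), then take the weighted average.
    noisy = [max(dp_noisy_accuracy(a, epsilon), 1e-6) for a in client_accs]
    total = sum(noisy)
    coeffs = [x / total for x in noisy]
    dim = len(client_weights[0])
    return [sum(c * w[i] for c, w in zip(coeffs, client_weights))
            for i in range(dim)]
```

Because the coefficients are positive and sum to one, the aggregate is always a convex combination of the client updates, regardless of how much noise the privacy budget injects.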
Problem

Research questions and friction points this paper is trying to address.

Balancing model performance, privacy, and efficiency in Federated Learning
Addressing quantum-enhanced expressivity and privacy in hybrid FL models
Reducing communication overhead while preserving privacy in FL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid CNN-PQC architecture for expressive learning
Selective homomorphic encryption for secure aggregation
Dynamic layer-wise adaptive freezing reduces overhead
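The dynamic layer-wise freezing contribution could follow a heuristic like the one below: stop transmitting layers whose round-over-round update is negligible, while never freezing quantum layers so their adaptability is preserved. The threshold value, the `pqc` name tag, and the norm-based criterion are assumptions for illustration, not necessarily the paper's exact rule.

```python
def select_frozen_layers(prev_round, curr_round, threshold=1e-3,
                         protect=("pqc",)):
    # Freeze (i.e., stop communicating) layers whose round-over-round
    # L2 update norm falls below the threshold; protected layers
    # (e.g., quantum parameters) are never frozen.
    frozen = []
    for name, curr in curr_round.items():
        if any(tag in name for tag in protect):
            continue  # keep quantum layers trainable and communicated
        delta = sum((c - p) ** 2
                    for c, p in zip(curr, prev_round[name])) ** 0.5
        if delta < threshold:
            frozen.append(name)
    return frozen
```

Each client would then omit the frozen layers from its upload, which is where the reported communication savings would come from.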