🤖 AI Summary
To address the resource heterogeneity and privacy sensitivity of edge devices, this paper proposes an efficient and secure federated learning framework that jointly optimizes training efficiency and privacy protection. Methodologically, it introduces a hierarchical asynchronous aggregation mechanism, a differential-privacy-driven opportunistic model update strategy, and encrypted aggregation based on threshold homomorphic encryption. It further pioneers the integration of frequency-domain poisoning detection into the encrypted global aggregation pipeline and incorporates resource-aware scheduling. Evaluated on a real-world platform, the framework matches the accuracy of baseline methods while reducing communication overhead by 37% and encrypted aggregation latency by 52%, and it effectively defends against both data poisoning and inference attacks. This work is the first to deeply integrate opportunistic updates with hierarchical privacy preservation, simultaneously achieving high efficiency, strong security, and robustness.
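The paper does not give implementation details for the opportunistic update strategy, but the idea of combining a transmission decision with differential privacy can be sketched roughly as follows: clip each local update, add Gaussian noise (DP-SGD style), and transmit only when the update is large enough to matter. All function names, the clipping norm, noise multiplier, and send threshold below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def dp_opportunistic_update(update, clip_norm=1.0, noise_mult=1.1,
                            send_threshold=0.05, rng=None):
    """Toy sketch of a DP-driven opportunistic update (assumed design).

    Clips the update to `clip_norm`, adds Gaussian noise calibrated to the
    clipping norm, and decides whether the raw update's magnitude justifies
    transmitting it at all (the "opportunistic" part).
    """
    rng = np.random.default_rng(rng)
    norm = np.linalg.norm(update)
    # Standard DP-SGD-style clipping: scale down if the norm exceeds the bound.
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Gaussian mechanism: noise scale proportional to the sensitivity bound.
    noised = clipped + rng.normal(0.0, noise_mult * clip_norm,
                                  size=update.shape)
    # Opportunistic decision: skip transmission of negligible updates.
    send = norm > send_threshold
    return noised, send
```

A device would call this after each local training round and upload `noised` only when `send` is true, which is one plausible way the 37% communication reduction could arise.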
📝 Abstract
Efficient and secure federated learning (FL) is a critical challenge for resource-limited devices, especially mobile devices. Existing secure FL solutions commonly incur significant overhead, forcing a trade-off between efficiency and security. As a result, these two concerns are typically addressed separately. This paper proposes Opportunistic Federated Learning (OFL), a novel FL framework designed explicitly for resource-heterogeneous and privacy-aware FL devices, solving the efficiency and security problems jointly. OFL optimizes resource utilization and adaptability across diverse devices by adopting a novel hierarchical and asynchronous aggregation strategy. OFL provides strong security by introducing a differentially private, opportunistic model updating mechanism for intra-cluster model aggregation and an advanced threshold homomorphic encryption scheme for inter-cluster aggregation. Moreover, OFL secures global model aggregation by detecting poisoning attacks through frequency analysis while the models remain encrypted. We have implemented OFL in a real-world testbed and evaluated it comprehensively. The evaluation results demonstrate that OFL achieves satisfactory model performance and improves efficiency and security, outperforming existing solutions.
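The abstract does not specify how the frequency analysis works, but one common plaintext intuition behind such detectors is that poisoned updates carry anomalously high high-frequency energy compared to benign ones. The following sketch, a loose illustration rather than OFL's actual (encrypted-domain) pipeline, scores each flattened client update by the fraction of its FFT spectrum in the upper frequency half and flags z-score outliers; the scoring ratio and threshold are assumptions.

```python
import numpy as np

def flag_poisoned(updates, z_thresh=2.5):
    """Toy frequency-domain poisoning detector (assumed design, plaintext).

    For each flattened update, computes the share of spectral magnitude in
    the upper half of the rFFT spectrum, then flags clients whose share is
    a z-score outlier relative to the cohort.
    """
    scores = []
    for u in updates:
        spec = np.abs(np.fft.rfft(np.asarray(u, dtype=float)))
        half = len(spec) // 2
        # High-frequency energy ratio: near 0 for smooth benign updates,
        # around 0.5 for noise-like poisoned ones.
        scores.append(spec[half:].sum() / (spec.sum() + 1e-12))
    scores = np.array(scores)
    z = (scores - scores.mean()) / (scores.std() + 1e-12)
    return np.abs(z) > z_thresh
```

In OFL the analogous statistics would have to be computed over ciphertexts (e.g., exploiting the homomorphic properties of the aggregation scheme), which this plaintext sketch deliberately does not attempt.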