OFL: Opportunistic Federated Learning for Resource-Heterogeneous and Privacy-Aware Devices

📅 2025-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address resource heterogeneity and privacy sensitivity on edge devices, this paper proposes an efficient and secure federated learning framework that jointly optimizes training efficiency and privacy protection. Methodologically, it introduces a hierarchical asynchronous aggregation mechanism, a differentially private opportunistic model-update strategy, and encrypted aggregation based on threshold homomorphic encryption. It further pioneers the integration of frequency-domain poisoning detection into the encrypted global aggregation pipeline and incorporates resource-aware scheduling. Evaluated on a real-world testbed, the framework achieves model accuracy comparable to baseline methods while reducing communication overhead by 37% and encrypted-aggregation latency by 52%, and it effectively defends against both data poisoning and inference attacks. This work is the first to deeply integrate opportunistic updates with hierarchical privacy preservation, simultaneously achieving high efficiency, strong security, and robustness.

📝 Abstract
Efficient and secure federated learning (FL) is a critical challenge for resource-limited devices, especially mobile devices. Existing secure FL solutions commonly incur significant overhead, leading to a contradiction between efficiency and security. As a result, these two concerns are typically addressed separately. This paper proposes Opportunistic Federated Learning (OFL), a novel FL framework designed explicitly for resource-heterogeneous and privacy-aware FL devices, solving efficiency and security problems jointly. OFL optimizes resource utilization and adaptability across diverse devices by adopting a novel hierarchical and asynchronous aggregation strategy. OFL provides strong security by introducing a differentially private and opportunistic model updating mechanism for intra-cluster model aggregation and an advanced threshold homomorphic encryption scheme for inter-cluster aggregation. Moreover, OFL secures global model aggregation by implementing poisoning attack detection using frequency analysis while keeping models encrypted. We have implemented OFL in a real-world testbed and evaluated OFL comprehensively. The evaluation results demonstrate that OFL achieves satisfactory model performance and improves efficiency and security, outperforming existing solutions.
Problem

Research questions and friction points this paper is trying to address.

Efficient and secure federated learning for resource-limited devices.
Jointly addressing efficiency and security in federated learning.
Optimizing resource utilization and adaptability across diverse devices.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical asynchronous aggregation strategy
Differentially private opportunistic updating
Threshold homomorphic encryption scheme
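The innovations above are listed only by name. As a minimal sketch of how a differentially private opportunistic update might work on the client side (this is not the paper's actual algorithm; the function name and the `threshold`, `clip_norm`, and `sigma` parameters are assumptions for illustration), a client could skip low-value uploads and noise the rest:

```python
import math
import random

def opportunistic_dp_update(local_delta, threshold, clip_norm, sigma,
                            rng=random.Random(0)):
    """Decide whether to upload a model delta, and noise it if uploaded.

    Hypothetical sketch: upload only when the delta is large enough to
    justify the communication cost (the "opportunistic" part), clip it
    to bound sensitivity, then add Gaussian noise for differential privacy.
    """
    norm = math.sqrt(sum(x * x for x in local_delta))
    if norm < threshold:
        return None                        # skip this round: save bandwidth
    scale = min(1.0, clip_norm / norm)     # clip to bound per-client sensitivity
    clipped = [x * scale for x in local_delta]
    return [x + rng.gauss(0.0, sigma * clip_norm) for x in clipped]
```

Skipping small updates saves uplink bandwidth on constrained devices, while clipping plus Gaussian noise is the standard recipe for differentially private aggregation; OFL combines these with hierarchical intra-/inter-cluster aggregation as described in the summary.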
Yunlong Mao
Nanjing University
Security · Privacy · Machine Learning
Mingyang Niu
Nanjing University
Ziqin Dang
Nanjing University
Chengxi Li
Nanjing University
Hanning Xia
Nanjing University
Yuejuan Zhu
Nanjing University
Haoyu Bian
Nanjing University
Yuan Zhang
Nanjing University
Jingyu Hua
Nanjing University
Sheng Zhong
Nanjing University
computer networks · security and privacy · theory of computing