Energy and Memory-Efficient Federated Learning With Ordered Layer Freezing

📅 2025-12-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency and poor scalability of federated learning (FL) when training deep models on resource-constrained IoT edge devices with limited computation, memory, and bandwidth, this paper proposes Ordered Layer Freezing (OLF), a parameter-freezing mechanism that freezes network layers in a predefined order before training, substantially reducing computational cost, memory footprint, and energy consumption. The paper further introduces Tensor Operation Approximation (TOA), a lightweight, quantization-free approximation of tensor operations that compresses the model while preserving accuracy better than conventional quantization. These techniques are complemented by FL framework optimizations and strategies tailored to non-IID data distributions. Extensive experiments on EMNIST, CIFAR-10/100, and CINIC-10 show that the approach outperforms state-of-the-art methods, achieving up to a 6.4% absolute accuracy gain together with significant gains in energy and memory efficiency.
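Layer freezing is straightforward to express in a deep-learning framework. The sketch below is a minimal PyTorch illustration of freezing layers in a fixed order, assuming the predefined order runs from the input side of the network (a common convention in layer-freezing FL; the paper's exact ordering may differ). The helper name `freeze_first_k_layers` is ours, not the paper's:

```python
# Minimal sketch of ordered layer freezing (assumption: the predefined
# order runs from the input side; FedOLF's exact order may differ).
import torch
import torch.nn as nn

def freeze_first_k_layers(model: nn.Module, k: int) -> None:
    """Freeze the first k top-level children of `model`, in order.

    Frozen layers get no weight gradients, so their gradient and
    optimizer-state memory is never allocated on the device, and the
    backward pass skips their weight-gradient computation.
    """
    for i, layer in enumerate(model.children()):
        trainable = i >= k  # only layers k, k+1, ... keep training
        for p in layer.parameters():
            p.requires_grad = trainable

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 32 * 32, 10),
)
freeze_first_k_layers(model, k=2)  # freeze the first conv + activation

# Hand only trainable parameters to the optimizer, so no momentum
# buffers are kept for frozen layers.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)
```

A client could then also upload only the unfrozen layers each round, which would shrink per-round communication as well.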

📝 Abstract
Federated Learning (FL) has emerged as a privacy-preserving paradigm for training machine learning models across distributed edge devices in the Internet of Things (IoT). By keeping data local and coordinating model training through a central server, FL effectively addresses privacy concerns and reduces communication overhead. However, the limited computational power, memory, and bandwidth of IoT edge devices pose significant challenges to the efficiency and scalability of FL, especially when training deep neural networks. Various FL frameworks have been proposed to reduce computation and communication overheads through dropout or layer freezing, but these approaches often sacrifice accuracy or neglect memory constraints. To this end, we introduce Federated Learning with Ordered Layer Freezing (FedOLF). FedOLF consistently freezes layers in a predefined order before training, significantly reducing computation and memory requirements. To further cut communication and energy costs, we incorporate Tensor Operation Approximation (TOA), a lightweight alternative to conventional quantization that better preserves model accuracy. Experimental results demonstrate that under non-IID data, FedOLF achieves higher accuracy than existing works by at least 0.3% on EMNIST (with CNN), 6.4% on CIFAR-10 (with AlexNet), 5.81% and 4.4% on CIFAR-100 (with ResNet20 and ResNet44), and 6.27% and 1.29% on CINIC-10 (with ResNet20 and ResNet44), along with higher energy efficiency and a lower memory footprint.
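The page does not spell out which tensor operation TOA approximates, so the following is only a hypothetical stand-in for a lightweight, quantization-free compression of transmitted tensors: top-k magnitude sparsification. The function names and the `keep_ratio` parameter are illustrative, not from the paper:

```python
# Hypothetical stand-in for quantization-free update compression.
# NOT the paper's TOA (its exact tensor operation is not described
# here); it only shows the shared goal of shrinking transmitted
# tensors while keeping the surviving values at full precision.
import torch

def topk_sparsify(update: torch.Tensor, keep_ratio: float = 0.1):
    """Keep only the largest-magnitude entries of a flattened update."""
    flat = update.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    _, indices = flat.abs().topk(k)
    # Transmit (indices, surviving values, shape) instead of the
    # dense tensor: roughly keep_ratio of the original payload.
    return indices, flat[indices], update.shape

def densify(indices, values, shape):
    """Server-side reconstruction of the sparse update."""
    flat = torch.zeros(shape.numel())  # shape is a torch.Size
    flat[indices] = values
    return flat.reshape(shape)

update = torch.randn(32, 16, 3, 3)            # e.g., a conv-layer delta
idx, vals, shape = topk_sparsify(update, 0.1)
restored = densify(idx, vals, shape)          # lossy, but unquantized
```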
Problem

Research questions and friction points this paper is trying to address.

Reducing computational and memory demands in federated learning for IoT devices
Improving energy efficiency and communication overhead in distributed model training
Enhancing model accuracy while addressing resource constraints on edge devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ordered layer freezing reduces computation and memory needs
Tensor Operation Approximation cuts communication and energy costs
Lightweight alternative to quantization maintains model accuracy
Ziru Niu
School of Computing Technologies, RMIT University, Melbourne, VIC 3000, Australia
Hai Dong
School of Computing Technologies, RMIT University
Service-Oriented Computing · Edge Intelligence · Blockchain · AI Security · Cyber Security
A. K. Qin
Department of Computing Technologies, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
Tao Gu
Department of Computing, Macquarie University, Sydney, New South Wales, Australia
Pengcheng Zhang
Beihang University
Computer Vision