🤖 AI Summary
To address the high communication overhead, excessive energy consumption, and unreliable convergence of federated learning (FL) deployed over wireless edge networks, this paper proposes a lightweight joint optimization framework. It establishes a closed-form expression for the convergence gap that unifies transmit power control, model pruning, and gradient quantization, and is the first work to jointly optimize these three strategies under dual latency and energy constraints. Bayesian optimization is further employed for efficient transmit power control. Extensive experiments on real-world datasets demonstrate that the proposed method reduces communication load by up to 68% and client energy consumption by up to 52% compared to state-of-the-art approaches, while incurring less than 1.2% accuracy degradation. The framework significantly improves the training efficiency and practical feasibility of FL in resource-constrained wireless edge environments.
📝 Abstract
With the exponential growth of smart devices connected to wireless networks, data is being produced at a rapidly increasing rate, and machine learning (ML) techniques are needed to unlock its value. However, the centralized ML paradigm raises concerns over communication overhead and privacy. Federated learning (FL) offers an alternative at the network edge, but its practical deployment in wireless networks remains challenging. This paper proposes a lightweight FL (LTFL) framework that integrates wireless transmit power control, model pruning, and gradient quantization. We derive a closed-form expression for the FL convergence gap that accounts for transmission error, model pruning error, and gradient quantization error. Based on these insights, we formulate an optimization problem that minimizes the convergence gap subject to delay and energy constraints. To solve this non-convex problem efficiently, we derive closed-form solutions for the optimal model pruning ratio and gradient quantization level, and employ Bayesian optimization for transmit power control. Extensive experiments on real-world datasets show that LTFL outperforms state-of-the-art schemes in accuracy, communication cost, and energy consumption.
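The paper's exact pruning and quantization rules are given in its derivations; as an illustration only, the two client-side compression steps can be sketched as magnitude-based pruning followed by QSGD-style stochastic uniform quantization. The function names, the choice of magnitude pruning, and the quantizer form are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def prune_update(update, prune_ratio):
    """Magnitude-based pruning (illustrative): zero out the fraction
    `prune_ratio` of entries with the smallest absolute value."""
    k = int(len(update) * prune_ratio)
    pruned = update.copy()
    if k > 0:
        # Indices of the k smallest-magnitude entries.
        idx = np.argsort(np.abs(update))[:k]
        pruned[idx] = 0.0
    return pruned

def quantize_update(update, levels, rng=None):
    """QSGD-style stochastic uniform quantization (illustrative):
    map each magnitude to one of `levels` levels in [0, ||update||],
    rounding up or down at random so the quantizer is unbiased."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    if norm == 0.0:
        return update.copy()
    scaled = np.abs(update) / norm * (levels - 1)
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part.
    q = lower + (rng.random(len(update)) < (scaled - lower))
    return np.sign(update) * q * norm / (levels - 1)

# Example: compress a toy local update before uplink transmission.
u = np.array([0.5, -1.0, 0.1, 2.0])
compressed = quantize_update(prune_update(u, 0.5), levels=5)
```

In an LTFL-style loop, a client would apply these two steps to its local update each round, with the pruning ratio and quantization level set by the server's closed-form solutions.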