TT-Prune: Joint Model Pruning and Resource Allocation for Communication-efficient Time-triggered Federated Learning

๐Ÿ“… 2025-11-06
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿค– AI Summary
To address the high communication overhead and prolonged training latency in time-triggered federated learning (TT-Fed) caused by growing device scale and limited wireless bandwidth, this paper proposes a joint optimization framework combining adaptive model pruning and bandwidth allocation. The authors first establish a convergence analysis for TT-Fed based on the ℓ₂-norm of local gradients, rigorously deriving an upper bound on the training loss. Leveraging Lagrange multipliers and the Karush–Kuhn–Tucker (KKT) conditions, they obtain closed-form optimal solutions for both the pruning ratios and the bandwidth allocation. Experiments demonstrate that the proposed method reduces communication cost by 40% while preserving model accuracy and significantly shortening end-to-end training latency. The work introduces a provably convergent joint optimization paradigm tailored to resource-constrained TT-Fed systems.

๐Ÿ“ Abstract
Federated learning (FL) offers new opportunities in machine learning, particularly in addressing data privacy concerns. In contrast to conventional event-based federated learning, time-triggered federated learning (TT-Fed), a general form encompassing both asynchronous and synchronous FL, clusters users into different tiers based on fixed time intervals. However, the FL network consists of a growing number of user devices sharing limited wireless bandwidth, which magnifies issues such as stragglers and communication overhead. In this paper, we introduce adaptive model pruning to wireless TT-Fed systems and study the problem of jointly optimizing the pruning ratio and bandwidth allocation to minimize the training loss while ensuring minimal learning latency. To this end, we perform convergence analysis on the gradient ℓ₂-norm of the TT-Fed model under model pruning. Based on the obtained convergence upper bound, a joint optimization problem over the pruning ratio and wireless bandwidth is formulated to minimize the model training loss under a given delay threshold. We then derive closed-form solutions for the wireless bandwidth and pruning ratio using the Karush–Kuhn–Tucker (KKT) conditions. Simulation results show that model pruning can reduce the communication cost by 40% while maintaining model performance at the same level.
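The communication saving claimed in the abstract comes from uploading only the unpruned weights of each local model. A minimal sketch of this idea, assuming simple magnitude-based pruning (the paper's exact pruning criterion and payload encoding are not specified here, so `prune_by_magnitude` and `uplink_cost` are illustrative names, not the authors' implementation):

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, ratio: float) -> np.ndarray:
    """Zero out the `ratio` fraction of smallest-magnitude weights.

    Illustrative stand-in for the adaptive pruning step: a user with
    pruning ratio `ratio` uploads only the surviving (1 - ratio)
    fraction of its model parameters.
    """
    k = int(ratio * weights.size)
    if k == 0:
        return weights.copy()
    flat = np.abs(weights).ravel()
    # k-th smallest magnitude becomes the pruning threshold
    # (ties at the threshold are also pruned).
    thresh = np.partition(flat, k - 1)[k - 1]
    return weights * (np.abs(weights) > thresh)

def uplink_cost(weights: np.ndarray, bytes_per_nonzero: int = 4) -> int:
    """Toy communication cost: bytes for the nonzero weights only
    (index overhead of the sparse encoding is ignored)."""
    return int(np.count_nonzero(weights)) * bytes_per_nonzero

# A 40% pruning ratio cuts the toy uplink payload by 40%.
w = np.arange(1.0, 11.0)          # 10 distinct-magnitude weights
pruned = prune_by_magnitude(w, 0.4)
print(uplink_cost(w), uplink_cost(pruned))  # 40 bytes -> 24 bytes
```

This only illustrates where the 40% figure can come from mechanically; the paper's contribution is choosing the per-user ratio jointly with bandwidth so that accuracy is preserved.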
Problem

Research questions and friction points this paper is trying to address.

Optimizing pruning ratio and bandwidth allocation in federated learning
Minimizing training loss while ensuring minimal learning latency
Reducing communication costs while maintaining model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly optimizes model pruning and bandwidth allocation
Uses KKT conditions for closed-form optimization solutions
Reduces communication costs by 40% while maintaining performance
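To make the "closed-form via KKT" idea concrete, here is a toy analogue (not the paper's actual objective): allocate a total bandwidth B across users to minimize the sum of uplink times d_k / b_k, where d_k is user k's payload size. Stationarity of the Lagrangian gives b_k proportional to sqrt(d_k), i.e. b_k = B·sqrt(d_k) / Σ_j sqrt(d_j) — the same style of closed-form KKT solution the paper derives for its loss-based objective.

```python
import math

def kkt_bandwidth(data_sizes, total_bandwidth):
    """Closed-form KKT solution of the toy problem
        min  sum_k d_k / b_k   s.t.  sum_k b_k = B,  b_k > 0.

    Lagrangian L = sum_k d_k/b_k + lam * (sum_k b_k - B);
    dL/db_k = -d_k/b_k**2 + lam = 0  =>  b_k = sqrt(d_k / lam),
    and the budget constraint fixes lam, giving
        b_k = B * sqrt(d_k) / sum_j sqrt(d_j).
    """
    roots = [math.sqrt(d) for d in data_sizes]
    total = sum(roots)
    return [total_bandwidth * r / total for r in roots]

# Users with payloads 1, 4, 9 sharing B = 6 units of bandwidth:
print(kkt_bandwidth([1.0, 4.0, 9.0], 6.0))  # [1.0, 2.0, 3.0]
```

Users with larger payloads get more bandwidth, but only in proportion to the square root of their payload; the paper's closed forms additionally couple this allocation to each user's pruning ratio and the convergence bound.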