Optimizing Communication and Device Clustering for Clustered Federated Learning with Differential Privacy

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses clustered federated learning (CFL) under communication constraints and privacy requirements in heterogeneous base station networks with non-IID data. It jointly optimizes device clustering, wireless resource block (RB) allocation, differential privacy (DP) noise injection, and model transmission latency. A dynamic penalty function assisted value decomposed multi-agent reinforcement learning (DPVD-MARL) framework is proposed, in which the penalty adapts to the number of devices violating communication constraints, enabling distributed, autonomous decision-making at base stations. Crucially, the DP mechanism is embedded into the MARL action space to co-optimize privacy budget allocation and radio resource scheduling. Experiments demonstrate that, compared to independent Q-learning, the proposed method converges up to 20% faster, achieves a 15% higher cumulative reward, and significantly reduces global training loss, effectively balancing privacy preservation, communication efficiency, and model accuracy.
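The paper does not publish code, but the dynamic-penalty idea is easy to illustrate: instead of one large fixed penalty for any invalid action, the penalty scales with how many devices violate the delay constraint. A minimal sketch is below; the function name, delay budget, and per-violation weight are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Illustrative sketch of the dynamic penalty idea (names and values are
# assumptions, not the paper's actual implementation).
def dynamic_penalty(upload_delays, delay_budget, per_violation=1.0):
    """Return a reward penalty proportional to the number of devices whose
    model-upload delay exceeds the budget, instead of one fixed large penalty."""
    num_violations = int(np.sum(np.asarray(upload_delays) > delay_budget))
    return -per_violation * num_violations

# Example: two of four devices miss a 0.5 s budget -> penalty of -2.0
print(dynamic_penalty([0.3, 0.8, 0.4, 0.9], delay_budget=0.5))
```

Because the penalty shrinks as fewer devices violate the constraint, the learner gets a gradient-like signal toward valid actions rather than a flat cliff, which is what the authors credit for the faster convergence.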

📝 Abstract
In this paper, a secure and communication-efficient clustered federated learning (CFL) design is proposed. In our model, several base stations (BSs) with heterogeneous task-handling capabilities and multiple users with non-independent and identically distributed (non-IID) data jointly perform CFL training incorporating differential privacy (DP) techniques. Since each BS can process only a subset of the learning tasks and has limited wireless resource blocks (RBs) to allocate to users for federated learning (FL) model parameter transmission, RB allocation and user scheduling must be jointly optimized for CFL performance. Meanwhile, the considered CFL method requires devices to use their limited data and FL model information to determine their task identities, which may introduce additional communication overhead. We formulate an optimization problem whose goal is to minimize the training loss of all learning tasks while accounting for device clustering, RB allocation, DP noise, and FL model transmission delay. To solve this problem, we propose a novel dynamic penalty function assisted value decomposed multi-agent reinforcement learning (DPVD-MARL) algorithm that enables distributed BSs to independently determine their connected users, RB allocation, and the DP noise applied to those users, while jointly minimizing the training loss of all learning tasks across all BSs. Unlike existing MARL methods that assign a large fixed penalty to invalid actions, we propose a novel penalty assignment scheme in which the penalty depends on the number of devices that cannot meet the communication constraints (e.g., delay); this guides the MARL scheme to quickly find valid actions and thus improves the convergence speed. Simulation results show that DPVD-MARL improves the convergence rate by up to 20% and the final accumulated reward by 15% compared to independent Q-learning.
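For readers unfamiliar with value decomposition, a minimal VDN-style sketch in PyTorch is shown below: each base station holds its own Q-network over local observations, and the joint action value used for training is simply the sum of the per-agent Q-values, which is what lets BSs act independently at execution time while being trained against a shared reward. The class and function names here are assumptions for illustration, not the paper's code.

```python
import torch
import torch.nn as nn

class BSAgent(nn.Module):
    """Per-base-station Q-network over local observations (illustrative)."""
    def __init__(self, obs_dim, num_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def vdn_joint_q(agents, observations, actions):
    """VDN mixing: the joint Q-value is the sum of each agent's Q-value for
    its chosen action, trained end-to-end on the shared team reward."""
    per_agent_q = [
        agent(obs).gather(-1, act.unsqueeze(-1)).squeeze(-1)
        for agent, obs, act in zip(agents, observations, actions)
    ]
    return torch.stack(per_agent_q).sum(dim=0)
```

In the paper's setting, each agent's action would encode user scheduling, RB assignment, and the DP noise level for its connected users, with the dynamic penalty folded into the shared reward.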
Problem

Research questions and friction points this paper is trying to address.

Optimize RB allocation and user scheduling for CFL performance
Minimize training loss considering device clustering and DP noise
Reduce communication overhead in device clustering for CFL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic penalty function assisted MARL algorithm
Joint optimization of RB allocation and user scheduling
Differential privacy integrated clustered federated learning (see the DP sketch after this list)
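The paper does not spell out its DP mechanism beyond embedding the noise level in the MARL action; a minimal sketch of the standard Gaussian mechanism for a local model update, clip to bound sensitivity and then add calibrated noise, is given below. The `clip_norm` and `sigma` parameters are illustrative assumptions, not the paper's values.

```python
import numpy as np

def dp_gaussian_update(model_update, clip_norm=1.0, sigma=1.0, rng=None):
    """Standard Gaussian-mechanism sketch: clip the local update to bound its
    L2 sensitivity, then add zero-mean Gaussian noise before transmission."""
    rng = rng if rng is not None else np.random.default_rng()
    update = np.asarray(model_update, dtype=np.float64)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=clipped.shape)

# Example: privatize a toy 3-parameter update before uploading it to the BS
print(dp_gaussian_update([0.9, -1.2, 0.4], clip_norm=1.0, sigma=0.8))
```

Larger `sigma` strengthens privacy but degrades model accuracy, which is exactly the trade-off the MARL agents are asked to balance against RB allocation and delay.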