🤖 AI Summary
To address the degradation of learning utility in federated learning caused by privacy preservation and quantization heterogeneity across devices, this paper proposes a differential privacy-enabled federated learning framework supporting collaborative training with multi-precision clients. The method introduces a novel bounded-distortion stochastic quantizer that minimizes quantization error while guaranteeing ε-differential privacy. It further incorporates cluster-size-adaptive optimization and linearly weighted model aggregation to enhance fusion accuracy for heterogeneous quantized models. Experimental results demonstrate that, under identical privacy budgets, the proposed approach achieves an average 3.2% higher test accuracy and accelerates convergence by approximately 1.8× compared to LaplaceSQ-FL. These improvements significantly alleviate the inherent trade-off among privacy protection, learning utility, and device-level quantization heterogeneity.
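The summary mentions linearly weighted model aggregation for fusing heterogeneous quantized models. The paper's exact weighting scheme (e.g., how cluster sizes map to weights) is not given here, but the general idea of a linearly weighted fusion step can be sketched as follows; the function name and the choice of plain lists are illustrative assumptions:

```python
def weighted_aggregate(models, weights):
    """Linearly combine client model vectors using normalized weights
    (e.g., weights proportional to cluster size), FedAvg-style.

    `models` is a list of equal-length parameter vectors; `weights`
    holds one non-negative weight per client/cluster.
    """
    total = sum(weights)
    dim = len(models[0])
    agg = [0.0] * dim
    for model, w in zip(models, weights):
        for i, p in enumerate(model):
            # Each parameter contributes in proportion to its weight.
            agg[i] += (w / total) * p
    return agg
```

For example, fusing two 2-parameter models with weights 1 and 3 gives a convex combination dominated by the second model: `weighted_aggregate([[1.0, 2.0], [3.0, 4.0]], [1, 3])` returns `[2.5, 3.5]`.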
📝 Abstract
Federated learning (FL) has emerged as a promising paradigm for distributed machine learning, enabling collaborative training of a global model across multiple local devices without requiring them to share raw data. Despite its advancements, FL is limited by factors such as (i) privacy risks arising from the unprotected transmission of local model updates to the fusion center (FC) and (ii) decreased learning utility caused by heterogeneity in model quantization resolution across participating devices. Prior work typically addresses only one of these challenges, because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. In this paper, our aim is therefore to improve the learning utility of privacy-preserving FL that allows clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) designed to simultaneously achieve differential privacy (DP) and minimum quantization error. Notably, the proposed SQ guarantees bounded distortion, unlike other DP approaches. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Numerical simulations validate the benefits of our approach in terms of privacy protection and learning utility compared to the conventional LaplaceSQ-FL algorithm.
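The abstract's key building block is a stochastic quantizer with bounded distortion. The paper's specific DP-achieving design is not reproduced here, but the underlying mechanism — randomized rounding to a uniform grid, which is unbiased and whose error is bounded by one quantization step — can be sketched as below; the function name and parameterization are assumptions for illustration:

```python
import random

def stochastic_quantize(x, lo, hi, levels):
    """Randomly round x (clipped to [lo, hi]) to one of `levels`
    uniformly spaced grid points.

    Rounding up happens with probability equal to the fractional
    distance to the upper grid point, so E[output] = x (unbiased)
    and |output - x| is at most one step (bounded distortion).
    """
    step = (hi - lo) / (levels - 1)
    x = min(max(x, lo), hi)        # clip into the quantization range
    idx = (x - lo) / step          # fractional grid index
    floor_idx = int(idx)
    frac = idx - floor_idx
    if random.random() < frac:     # round up with probability `frac`
        floor_idx += 1
    return lo + floor_idx * step
```

The randomness that makes the quantizer unbiased is also what a DP analysis would build on; the paper's contribution, per the abstract, is a randomization design that yields an ε-DP guarantee while keeping the distortion bounded, which plain additive-noise mechanisms (e.g., Laplace) do not.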