Privacy-Preserving Quantized Federated Learning with Diverse Precision

📅 2025-07-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degradation of learning utility in federated learning caused by privacy preservation and quantization heterogeneity across devices, this paper proposes a differential privacy-enabled federated learning framework supporting collaborative training with multi-precision clients. The method introduces a novel bounded-distortion stochastic quantizer that minimizes quantization error while guaranteeing ε-differential privacy. It further incorporates cluster-size-adaptive optimization and linearly weighted model aggregation to enhance fusion accuracy for heterogeneous quantized models. Experimental results demonstrate that, under identical privacy budgets, the proposed approach achieves an average 3.2% higher test accuracy and accelerates convergence by approximately 1.8× compared to LaplaceSQ-FL. These improvements significantly alleviate the inherent trade-off among privacy protection, learning utility, and device-level quantization heterogeneity.
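The summary above describes a stochastic quantizer with bounded distortion. As a minimal sketch of the general idea (not the paper's DP-calibrated quantizer, whose randomization probabilities are tuned to guarantee ε-differential privacy), a plain unbiased stochastic quantizer rounds each value to one of its two nearest grid points with probabilities chosen so the expected output equals the input:

```python
import numpy as np

def stochastic_quantize(x, n_bits, lo=-1.0, hi=1.0):
    """Unbiased stochastic quantizer with bounded distortion.

    Each value is clipped to [lo, hi] and rounded to one of the two
    nearest grid points, with probabilities chosen so that the
    expected output equals the input (E[Q(x)] = x).
    """
    levels = 2 ** n_bits - 1          # number of quantization steps
    step = (hi - lo) / levels
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    idx = (x - lo) / step             # fractional position on the grid
    floor = np.floor(idx)
    p_up = idx - floor                # probability of rounding up
    up = np.random.random(np.shape(x)) < p_up
    return lo + (floor + up) * step

# Distortion is deterministically bounded: |Q(x) - x| <= step,
# unlike unbounded additive-noise DP mechanisms.
```

The bounded-distortion property (error never exceeds one grid step) is what the abstract contrasts with conventional DP noise mechanisms; the paper's contribution is achieving ε-DP within this bounded-error structure.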

📝 Abstract
Federated learning (FL) has emerged as a promising paradigm for distributed machine learning, enabling collaborative training of a global model across multiple local devices without requiring them to share raw data. Despite its advancements, FL is limited by factors such as: (i) privacy risks arising from the unprotected transmission of local model updates to the fusion center (FC) and (ii) decreased learning utility caused by heterogeneity in model quantization resolution across participating devices. Prior work typically addresses only one of these challenges because maintaining learning utility under both privacy risks and quantization heterogeneity is a non-trivial task. In this paper, our aim is therefore to improve the learning utility of a privacy-preserving FL that allows clusters of devices with different quantization resolutions to participate in each FL round. Specifically, we introduce a novel stochastic quantizer (SQ) that is designed to simultaneously achieve differential privacy (DP) and minimum quantization error. Notably, the proposed SQ guarantees bounded distortion, unlike other DP approaches. To address quantization heterogeneity, we introduce a cluster size optimization technique combined with a linear fusion approach to enhance model aggregation accuracy. Numerical simulations validate the benefits of our approach in terms of privacy protection and learning utility compared to the conventional LaplaceSQ-FL algorithm.
Problem

Research questions and friction points this paper is trying to address.

Privacy risks in federated learning model updates
Learning utility loss from quantization heterogeneity
Balancing differential privacy and quantization accuracy
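To make the privacy–accuracy friction concrete: the LaplaceSQ-FL baseline the paper compares against combines quantization with additive Laplace noise. A generic sketch of Laplace-style privatization of a clipped model update (illustrative only; clipping bound and per-entry sensitivity accounting are assumptions, not the baseline's exact mechanism) looks like:

```python
import numpy as np

def laplace_privatize(update, clip, eps):
    """Clip an update entrywise, then add Laplace noise for eps-DP.

    With entries clipped to [-clip, clip], replacing one client's value
    changes an entry by at most 2*clip, so Laplace noise with scale
    2*clip/eps yields eps-DP per entry. The noise is unbounded, which
    is exactly the distortion issue the paper's quantizer avoids.
    """
    u = np.clip(np.asarray(update, dtype=float), -clip, clip)
    scale = 2.0 * clip / eps
    return u + np.random.laplace(0.0, scale, size=u.shape)
```

Smaller ε (stronger privacy) inflates the noise scale, which is the utility loss the bullets above refer to.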
Innovation

Methods, ideas, or system contributions that make the work stand out.

Novel stochastic quantizer for differential privacy
Cluster size optimization for quantization heterogeneity
Linear fusion approach for accurate model aggregation
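The third bullet, linearly weighted fusion of heterogeneous-precision cluster models, can be sketched as follows. The weighting shown (cluster size times inverse quantization-noise variance) is an illustrative choice; the paper derives its fusion weights and cluster sizes via optimization, which is not reproduced here:

```python
import numpy as np

def linear_fusion(cluster_models, cluster_sizes, cluster_bits, lo=-1.0, hi=1.0):
    """Linearly weighted aggregation of quantized cluster models.

    Illustrative weights only: each cluster's weight combines its size
    with the inverse of its quantization-noise variance, so
    higher-precision clusters contribute more to the fused model.
    """
    steps = [(hi - lo) / (2 ** b - 1) for b in cluster_bits]
    # variance of an unbiased stochastic quantizer is at most step^2 / 4
    inv_var = [4.0 / s ** 2 for s in steps]
    w = np.array([n * v for n, v in zip(cluster_sizes, inv_var)])
    w = w / w.sum()
    return sum(wk * mk for wk, mk in zip(w, cluster_models))
```

With this weighting, a low-bit cluster's noisier model is down-weighted rather than discarded, which is the intuition behind letting mixed-precision clients participate in every round.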
Dang Qua Nguyen
Department of Electrical Engineering and Computer Science, The University of Kansas, Lawrence, KS 66045 USA
Morteza Hashemi
Assistant Professor, EECS, University of Kansas (KU)
Communication Networks, Wireless Networking, mmWave Communications, Cyber-physical Systems
Erik Perrins
Professor, Department of Electrical Engineering & Computer Science, University of Kansas
Communication Theory
Sergiy A. Vorobyov
Department of Information and Communications Engineering, Aalto University, 02150 Espoo, Finland
David J. Love
School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907 USA
Taejoon Kim
Professor, School of ECEE, Arizona State University
Wireless Communications, Signal Processing, Optimization