Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning

📅 2025-01-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the dual challenges of high communication overhead and limited privacy protection in federated learning, this paper proposes CEPAM, a mechanism that jointly achieves gradient compression and tunable differential privacy within the trusted-aggregator model. CEPAM builds on the rejection-sampled universal quantizer (RSUQ), a randomized vector quantizer whose quantization distortion is equivalent to a prescribed noise such as Gaussian or Laplace, so a single mechanism provides both compression and differential-privacy noise. The authors analyze the resulting trade-off among user privacy, global utility, and transmission rate, allowing clients and the server to customize the privacy-accuracy-compression balance, and prove that CEPAM satisfies (ε, δ)-differential privacy with analytical guarantees on global utility. Experiments on MNIST show that CEPAM reduces communication cost while surpassing baseline methods in learning accuracy.

📝 Abstract
Training machine learning models on decentralized private data via federated learning (FL) poses two key challenges: communication efficiency and privacy protection. In this work, we address these challenges within the trusted aggregator model by introducing a novel approach called the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), achieving both objectives simultaneously. In particular, CEPAM leverages the rejection-sampled universal quantizer (RSUQ), a construction of randomized vector quantizer whose resulting distortion is equivalent to a prescribed noise, such as Gaussian or Laplace noise, enabling joint differential privacy and compression. Moreover, we analyze the trade-offs among user privacy, global utility, and transmission rate of CEPAM by defining appropriate metrics for FL with differential privacy and compression. Our CEPAM provides the additional benefit of privacy adaptability, allowing clients and the server to customize privacy protection based on required accuracy and protection. We assess CEPAM's utility performance using MNIST dataset, demonstrating that CEPAM surpasses baseline models in terms of learning accuracy.
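The abstract describes RSUQ as a randomized quantizer whose distortion is distributed exactly like a prescribed noise (Gaussian or Laplace), so the same operation compresses a gradient and injects DP noise. The simplest relative of this idea is classical subtractive dithered uniform quantization, where a dither shared between encoder and decoder makes the quantization error exactly uniform and independent of the input; RSUQ, per the abstract, uses rejection sampling to shape that error into a Gaussian or Laplace law instead. The sketch below shows only the classical uniform-dither case and is not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_quantize(x, step, dither):
    """Subtractive dithered uniform quantization (the classical 'universal quantizer').

    The encoder transmits only the integer index q; the decoder, which shares
    the dither via common randomness, reconstructs x_hat = (q + dither) * step.
    """
    q = np.rint(x / step - dither)       # transmitted (entropy-coded) integer index
    x_hat = (q + dither) * step          # decoder-side reconstruction
    return q, x_hat

step = 0.5
x = rng.normal(size=100_000)                   # stand-in for gradient coordinates
dither = rng.uniform(-0.5, 0.5, size=x.shape)  # shared dither (common randomness)
_, x_hat = dithered_quantize(x, step, dither)
err = x_hat - x
# Classical result: err is uniform on [-step/2, step/2] and independent of x,
# with variance step**2 / 12.
```

Replacing the uniform error law with Gaussian or Laplace (which RSUQ achieves via rejection sampling) is what lets the compression distortion double as calibrated DP noise.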
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Communication Efficiency
Privacy Protection
Innovation

Methods, ideas, or system contributions that make the work stand out.

CEPAM
RSUQ
Federated Learning
Chih Wei Ling
Department of Computer Science, City University of Hong Kong, City University of Hong Kong Shenzhen Research Institute
Youqi Wu
Department of Computer Science, The Chinese University of Hong Kong
Jiande Sun
School of Information Science and Engineering, Shandong Normal University
Cheuk Ting Li
Assistant Professor, Dept of Information Engineering, CUHK
Information Theory
Linqi Song
Associate Professor, Department of Computer Science, City University of Hong Kong
Information Theory · Federated Learning · Natural Language Processing
Weitao Xu
Associate Professor at City University of Hong Kong
IoT · Mobile computing · Security · LPWAN · LLM