Harnessing Sparsification in Federated Learning: A Secure, Efficient, and Differentially Private Realization

📅 2025-11-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Federated learning faces the dual challenges of high communication overhead and privacy leakage. This paper proposes Clover, an efficient, secure, and differentially private federated learning framework. Methodologically, Clover employs a distributed three-server architecture to enable secure aggregation of top-k sparsified gradients, integrates lightweight distributed noise generation for differential privacy, and adds integrity checks to withstand a malicious server; its tailored sparse-aggregation mechanism outperforms a baseline built on distributed ORAM. Its key contributions include: (i) significantly reduced client-side communication costs and server-side computational overhead compared to state-of-the-art approaches; (ii) model utility approaching that of centralized differential privacy baselines; and (iii) robust security guarantees under adversarial server settings. Experimental results show that Clover outperforms mainstream baselines in communication efficiency, end-to-end latency, and the privacy–utility trade-off.

📝 Abstract
Federated learning (FL) enables multiple clients to jointly train a model by sharing only gradient updates for aggregation instead of raw data. Because many clients transmit very high-dimensional gradient updates, FL is known to suffer from a communication bottleneck. Meanwhile, the gradients shared by clients, as well as the trained model, may be exploited to infer private local datasets, so privacy remains a critical concern in FL. We present Clover, a novel system framework for communication-efficient, secure, and differentially private FL. To tackle the communication bottleneck, Clover follows a standard and widely used approach, top-k gradient sparsification, in which each client sparsifies its gradient update so that only the k largest-magnitude entries are preserved for aggregation. Clover provides a tailored mechanism built on a trending distributed-trust setting involving three servers, which allows multiple sparse vectors (top-k sparsified gradient updates) to be efficiently aggregated into a dense vector while hiding the values and indices of the non-zero elements in each sparse vector. This mechanism outperforms a baseline built on the general distributed ORAM technique by several orders of magnitude in server-side communication and runtime, with smaller client communication cost as well. We further integrate this mechanism with a lightweight distributed noise generation mechanism to offer differential privacy (DP) guarantees on the trained model. To harden Clover against a malicious server, we devise a series of lightweight mechanisms for integrity checks on the server-side computation. Extensive experiments show that Clover achieves utility comparable to vanilla FL with central DP, with promising performance.
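The top-k sparsification step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not Clover's actual client code; the function name and the toy gradient are assumptions for demonstration.

```python
import numpy as np

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector.

    Returns the indices and values of the retained entries; all other
    entries are treated as zero for aggregation.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

# Example: sparsify an 8-dimensional gradient down to its top-3 entries.
g = np.array([0.1, -2.0, 0.05, 1.5, -0.3, 0.0, 3.0, -0.2])
idx, vals = topk_sparsify(g, 3)

# Reconstruct the sparse update as a dense vector (what aggregation sums).
dense = np.zeros_like(g)
dense[idx] = vals
```

In Clover, only the (index, value) pairs would leave the client, which is what makes the representation communication-efficient, and the servers' job is to sum such sparse vectors into a dense one without learning the pairs themselves.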
Problem

Research questions and friction points this paper is trying to address.

Addresses communication bottlenecks in federated learning via gradient sparsification
Enhances privacy protection using differential privacy and secure aggregation
Secures federated learning against malicious servers with integrity checks
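A common building block for the secure aggregation mentioned above is additive secret sharing across the servers. The sketch below shows that idea only; the modulus, function names, and share layout are illustrative assumptions, not Clover's actual three-server protocol.

```python
import numpy as np

MOD = 2**16  # illustrative modulus; real protocols use larger rings

def share(vec, num_servers, rng):
    """Additively secret-share an integer vector: each individual share is
    uniformly random on its own, but all shares sum to the vector mod MOD."""
    shares = [rng.integers(0, MOD, size=vec.shape, dtype=np.int64)
              for _ in range(num_servers - 1)]
    shares.append((vec - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recover the vector by summing all shares mod MOD."""
    return sum(shares) % MOD

rng = np.random.default_rng(42)
vec = rng.integers(0, MOD, size=5, dtype=np.int64)  # a toy quantized gradient
shares = share(vec, num_servers=3, rng=rng)
```

Because shares are additive, each server can locally sum its shares from all clients, and only the final reconstruction reveals the aggregate, never any individual update.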
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses top-k gradient sparsification for efficiency
Employs three-server trust setting for security
Integrates distributed noise generation for privacy
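The distributed-noise idea in the last bullet can be illustrated as follows: each server contributes an independent Gaussian share, and because variances add, only the summed noise reaches the target standard deviation. This is a hedged sketch under assumed parameters; Clover's actual noise-generation protocol is not reproduced here.

```python
import numpy as np

def distributed_gaussian_noise(dim, sigma, num_servers, rng):
    """Each server samples an independent Gaussian share; variances add
    across servers, so the summed noise has standard deviation sigma."""
    share_sigma = sigma / np.sqrt(num_servers)
    return [rng.normal(0.0, share_sigma, size=dim) for _ in range(num_servers)]

rng = np.random.default_rng(7)
shares = distributed_gaussian_noise(dim=10_000, sigma=1.0, num_servers=3, rng=rng)
total = sum(shares)  # the noise actually added to the aggregated update
```

The appeal of this construction is that no single server knows the full noise vector, so no server can subtract it from the aggregate, yet the model still enjoys the DP guarantee of noise with standard deviation sigma.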