AI Summary
Federated learning faces dual challenges of high communication overhead and privacy leakage. This paper proposes Clover, an efficient, secure, and differentially private federated learning framework. Methodologically, Clover employs a distributed three-server architecture to enable secure aggregation of top-k sparse gradients, integrates lightweight distributed noise generation, and leverages ORAM-optimized privacy-preserving mechanisms, complemented by integrity verification to withstand malicious servers. Its key contributions include: (i) significantly reduced client-side communication costs and server-side computational overhead compared to state-of-the-art approaches; (ii) model utility approaching that of centralized differential privacy baselines; and (iii) robust security guarantees under adversarial server settings. Experimental results demonstrate that Clover outperforms mainstream baseline methods in communication efficiency, end-to-end latency, and the privacy-utility trade-off.
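The summary mentions lightweight distributed noise generation for differential privacy. A common way to realize this, sketched below purely for illustration (the paper's concrete mechanism may differ), is to split additive Gaussian noise across the three servers: each server independently samples a share with variance sigma^2 / 3, so the shares sum to a single Gaussian with the target variance sigma^2 and no single server ever sees or controls the full noise value.

```python
import random

def noise_share(sigma, num_parties):
    """One party's share of Gaussian DP noise.

    Each of num_parties servers independently samples Gaussian noise with
    variance sigma^2 / num_parties; summed over all parties, the shares
    form a single Gaussian sample with the target variance sigma^2.
    (Illustrative assumption; not necessarily Clover's exact mechanism.)
    """
    return random.gauss(0.0, sigma / num_parties ** 0.5)

# Each of three servers adds its own share to the aggregated gradient;
# the released sum then carries Gaussian noise with standard deviation sigma.
sigma = 2.0
aggregate = [0.5, -1.0, 0.25, 3.0]  # toy aggregated gradient
noisy = [g + sum(noise_share(sigma, 3) for _ in range(3)) for g in aggregate]
```

The design point is that an adversary controlling one (or even two) servers only observes partial noise, which is why the noise must be generated in a distributed fashion rather than by any single party.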
Abstract
Federated learning (FL) enables multiple clients to jointly train a model by sharing only gradient updates for aggregation instead of raw data. Because many clients transmit very high-dimensional gradient updates, FL is known to suffer from a communication bottleneck. Meanwhile, the gradients shared by clients, as well as the trained model, may be exploited to infer private local datasets, so privacy remains a critical concern in FL. We present Clover, a novel system framework for communication-efficient, secure, and differentially private FL. To tackle the communication bottleneck, Clover follows a standard and widely used approach, top-k gradient sparsification, in which each client sparsifies its gradient update so that only the k largest-magnitude gradient entries are preserved for aggregation. Clover provides a tailored mechanism built on a trending distributed-trust setting involving three servers, which efficiently aggregates multiple sparse vectors (top-k sparsified gradient updates) into a dense vector while hiding the values and indices of the non-zero elements in each sparse vector. This mechanism outperforms a baseline built on the general distributed ORAM technique by several orders of magnitude in server-side communication and runtime, while also incurring lower client communication cost. We further integrate this mechanism with a lightweight distributed noise generation mechanism to offer differential privacy (DP) guarantees on the trained model. To harden Clover against a malicious server, we devise a series of lightweight mechanisms for integrity checks on the server-side computation. Extensive experiments show that Clover achieves utility comparable to vanilla FL with central DP, with promising end-to-end performance.
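The client-side step described above, top-k gradient sparsification followed by aggregation of the sparse updates into a dense vector, can be sketched in plain form as below. This is only the functional behavior; Clover's actual protocol additionally secret-shares the (index, value) pairs so the servers never learn them.

```python
import heapq

def topk_sparsify(grad, k):
    """Keep only the k largest-magnitude entries of a dense gradient.

    Returns a sparse representation as a sorted list of (index, value)
    pairs. In Clover these pairs would be hidden from the servers; here
    they are in the clear purely for illustration.
    """
    idx = heapq.nlargest(k, range(len(grad)), key=lambda i: abs(grad[i]))
    return sorted((i, grad[i]) for i in idx)

def aggregate_sparse(updates, dim):
    """Aggregate several sparse updates into one dense vector."""
    agg = [0.0] * dim
    for update in updates:
        for i, v in update:
            agg[i] += v
    return agg

# Example: two clients, 8-dimensional gradients, k = 2.
g1 = [0.1, -2.0, 0.0, 3.5, 0.2, -0.1, 0.0, 0.4]
g2 = [1.5, 0.3, -0.2, 0.1, -2.2, 0.0, 0.7, 0.0]
s1 = topk_sparsify(g1, 2)   # [(1, -2.0), (3, 3.5)]
s2 = topk_sparsify(g2, 2)   # [(0, 1.5), (4, -2.2)]
agg = aggregate_sparse([s1, s2], 8)
```

Each client thus transmits only k index-value pairs instead of the full d-dimensional gradient, which is the source of the communication savings; the challenge Clover addresses is performing the `aggregate_sparse` step obliviously across three servers.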