🤖 AI Summary
To address the challenge of simultaneously achieving strong privacy guarantees, low communication overhead, and fast convergence in decentralized learning, this paper proposes ZIP-DL, a novel algorithm that attains rigorous $(\varepsilon,\delta)$-differential privacy against local adversaries while using only a single communication round per gradient step. Its core innovation is a structured correlated noise injection mechanism: the gradient perturbations added at each node are designed so that they progressively cancel during distributed aggregation, preserving model accuracy and convergence speed without weakening the privacy guarantee. Theoretical analysis establishes both the differential privacy guarantee and convergence properties. Experiments show that ZIP-DL reduces the success rate of linkability attacks by up to 52 percentage points compared to baseline DL; that, when configured for equivalent resistance to membership inference attacks, it improves accuracy by up to 37 percent over the state-of-the-art privacy-preserving mechanism under the same threat model; and that it cuts communication overhead by up to 10.5× against the same competitor.
📝 Abstract
This paper introduces ZIP-DL, a novel privacy-aware decentralized learning (DL) algorithm that exploits correlated noise to provide strong privacy protection against a local adversary while yielding efficient convergence guarantees at a low communication cost. The progressive neutralization of the added noise during the distributed aggregation process lets ZIP-DL maintain high model accuracy under privacy guarantees. ZIP-DL further uses a single communication round between each gradient descent step, thus minimizing communication overhead. We provide theoretical guarantees for both convergence speed and privacy, thereby making ZIP-DL applicable to practical scenarios. Our extensive experimental study shows that ZIP-DL significantly outperforms the state of the art in terms of the vulnerability/accuracy trade-off. In particular, ZIP-DL (i) reduces the efficacy of linkability attacks by up to 52 percentage points compared to baseline DL, (ii) improves accuracy by up to 37 percent w.r.t. the state-of-the-art privacy-preserving mechanism operating under the same threat model as ours, when configured to provide the same protection against membership inference attacks, and (iii) reduces communication by up to 10.5× against the same competitor for the same level of protection.
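The key idea of correlated noise that neutralizes during aggregation can be illustrated with a minimal sketch. This is not ZIP-DL's actual mechanism (the paper's noise design, topology handling, and privacy accounting are more involved); it is a generic zero-sum pairwise-noise toy in NumPy, with all node counts and scales chosen arbitrarily, showing how individually masked gradients can still aggregate exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 4, 3  # hypothetical network size and gradient dimension

# Hypothetical local gradients, one row per node.
grads = rng.normal(size=(n_nodes, dim))

# Zero-sum correlated noise: for every pair (i, j), node i adds n_ij and
# node j adds -n_ij, so the total noise across the network is zero.
noise = np.zeros((n_nodes, dim))
for i in range(n_nodes):
    for j in range(i + 1, n_nodes):
        n_ij = rng.normal(scale=5.0, size=dim)
        noise[i] += n_ij
        noise[j] -= n_ij

# What each node would share: its gradient masked by its noise share.
noisy = grads + noise

# Individual contributions are heavily perturbed...
assert not np.allclose(noisy, grads)
# ...but the network-wide average is exact: the noise cancels.
assert np.allclose(noisy.mean(axis=0), grads.mean(axis=0))
```

In a decentralized setting the cancellation happens progressively over gossip rounds rather than in one global average, which is what allows ZIP-DL to keep accuracy high while each released message remains noisy.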