🤖 AI Summary
This work proposes a novel method for distributed non-convex learning over undirected networks that simultaneously achieves communication efficiency and rigorous differential privacy guarantees. By performing multiple local training rounds between communications, the algorithm reduces communication frequency, while gradient clipping and additive noise in the local updates ensure strong privacy protection of the raw data. Theoretical analysis demonstrates that, under a fixed privacy budget, the algorithm converges to a neighborhood of a stationary point of the objective function. Experimental results on classification tasks show that the proposed approach outperforms existing state-of-the-art methods, marking the first framework for non-convex distributed learning to jointly achieve efficient communication, provable differential privacy, and convergence guarantees.
📝 Abstract
We address nonconvex learning problems over undirected networks. In particular, we focus on the challenge of designing an algorithm that is both communication-efficient and privacy-preserving with respect to the agents' data. The first goal is achieved through a local training approach, which reduces communication frequency. The second goal is achieved by perturbing gradients during local training, specifically through gradient clipping and additive noise. We prove that the resulting algorithm converges to a neighborhood of a stationary point of the problem. Additionally, we provide theoretical privacy guarantees within a differential privacy framework, ensuring that agents' training data cannot be inferred from the trained model shared over the network. Under the same privacy budget, we show that the algorithm outperforms state-of-the-art methods on a classification task.
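The privatization mechanism described above (clip each local gradient, then add noise before the update) can be sketched as follows. This is a minimal illustration of the general clipping-plus-Gaussian-noise pattern, not the paper's exact algorithm: the function name, step size, clipping threshold, and noise scale are all hypothetical, and the paper's actual noise calibration to a privacy budget is not reproduced here.

```python
import numpy as np

def private_local_step(w, grad, lr=0.1, clip_norm=1.0, noise_std=0.5, rng=None):
    """One privatized local update (illustrative sketch):
    clip the gradient to norm `clip_norm` to bound its sensitivity,
    add Gaussian noise, then take a gradient-descent step."""
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    # Scale the gradient down only if its norm exceeds the threshold.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Additive Gaussian noise masks the contribution of any single data point.
    noisy = clipped + rng.normal(0.0, noise_std, size=grad.shape)
    return w - lr * noisy
```

In a distributed run, each agent would apply several such steps between communication rounds and then share only the perturbed model with its neighbors, so raw data never leaves the agent.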