Unified Privacy Guarantees for Decentralized Learning via Matrix Factorization

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Decentralized learning (DL) avoids raw data sharing but often exhibits a worse privacy–utility trade-off than centralized training, in part because current differential privacy (DP) accounting for DL is coarse-grained. To address this, the paper brings matrix factorization (MF) into DL privacy analysis for the first time: a unified formulation that models both algorithmic dynamics and network trust topology while explicitly capturing temporal correlations among injected noise, enabling tighter user-level DP accounting. Building on this framework, the paper introduces MAFALDA-SGD, a gossip-based DL algorithm with user-level correlated noise. On synthetic and real-world graph datasets, MAFALDA-SGD achieves 3.2–7.8% higher test accuracy under the same privacy budget (ε, δ), or reduces ε by up to 40% at equivalent utility. The result is a scalable, theoretically grounded DP accounting paradigm for decentralized learning.

📝 Abstract
Decentralized Learning (DL) enables users to collaboratively train models without sharing raw data by iteratively averaging local updates with neighbors in a network graph. This setting is increasingly popular for its scalability and its ability to keep data local under user control. Strong privacy guarantees in DL are typically achieved through Differential Privacy (DP), with results showing that DL can even amplify privacy by disseminating noise across peer-to-peer communications. Yet in practice, the observed privacy-utility trade-off often appears worse than in centralized training, which may be due to limitations in current DP accounting methods for DL. In this paper, we show that recent advances in centralized DP accounting based on Matrix Factorization (MF) for analyzing temporal noise correlations can also be leveraged in DL. By generalizing existing MF results, we show how to cast both standard DL algorithms and common trust models into a unified formulation. This yields tighter privacy accounting for existing DP-DL algorithms and provides a principled way to develop new ones. To demonstrate the approach, we introduce MAFALDA-SGD, a gossip-based DL algorithm with user-level correlated noise that outperforms existing methods on synthetic and real-world graphs.
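The abstract's core setting, where each node takes a noisy local SGD step and then averages with its neighbors, can be sketched as follows. This is a minimal illustration of noisy gossip SGD, not the paper's actual algorithm; the gossip matrix `W`, clipping assumption, and noise placement are generic choices for exposition.

```python
import numpy as np

def gossip_dp_step(params, grads, W, sigma, lr=0.1, rng=None):
    """One round of noisy gossip SGD over n nodes.

    params: (n, d) array, each row a node's local model.
    grads:  (n, d) array of local gradients (assumed clipped to norm 1).
    W:      (n, n) doubly stochastic gossip matrix matching the graph.
    sigma:  Gaussian noise scale providing local DP.
    """
    rng = np.random.default_rng(rng)
    # Each node perturbs its own update, then the graph averages the
    # noisy models, spreading the injected noise across peers.
    noisy = params - lr * (grads + sigma * rng.standard_normal(params.shape))
    return W @ noisy
```

Privacy amplification in this setting comes from the fact that each node only ever sees neighbor averages of already-noised models, never another node's raw update.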
Problem

Research questions and friction points this paper is trying to address.

Improving privacy-utility trade-offs in decentralized learning systems
Extending matrix factorization methods for differential privacy accounting
Developing tighter privacy guarantees for peer-to-peer training algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Matrix Factorization for decentralized privacy accounting
Unified formulation for algorithms and trust models
MAFALDA-SGD, a gossip-based algorithm with user-level correlated noise that outperforms existing methods on synthetic and real-world graphs
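The matrix-factorization viewpoint behind these contributions can be sketched generically. A linear workload A (here, prefix sums of per-round gradients) is factored as A = B·C; the mechanism releases B(Cg + z) = Ag + Bz, so the L2 sensitivity is governed by the column norms of C and the added error by the row norms of B. This is the standard MF-DP template, not the paper's specific factorization or trust-model extension.

```python
import numpy as np

def mf_mechanism(A, B, C, g, sigma, rng=None):
    """Matrix-factorization mechanism for a linear workload A = B @ C.

    Releases B @ (C @ g + z) with z ~ N(0, sigma^2 I), which equals
    A @ g + B @ z: noise is correlated across rounds through B.
    """
    assert np.allclose(A, B @ C)
    rng = np.random.default_rng(rng)
    z = sigma * rng.standard_normal((C @ g).shape)
    return B @ (C @ g + z)

def l2_sensitivity(C):
    # Max column norm of C: how much one participant's contribution
    # can move C @ g, which sets the noise scale needed for DP.
    return np.linalg.norm(C, axis=0).max()

# Prefix-sum workload over T rounds (cumulative model updates).
T = 5
A = np.tril(np.ones((T, T)))
```

Choosing the factorization trades sensitivity against error: the trivial choice C = A, B = I has sensitivity sqrt(T), while better factorizations spread correlated noise across rounds to achieve tighter accounting.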