Privacy-Preserving Decentralized Federated Learning via Explainable Adaptive Differential Privacy

📅 2025-09-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
In decentralized federated learning (DFL), model updates are vulnerable to inference and membership inference attacks, and repeated client exchanges exacerbate privacy leakage. Conventional differential privacy (DP) methods, operating as black-box training procedures, cannot track historical noise accumulation and therefore force worst-case noise injection that degrades model accuracy. This paper proposes PrivateDFL, the first interpretable, adaptive DP framework for serverless DFL. It integrates hyperdimensional computing with DP to establish an auditable, cumulative noise-tracking mechanism, enabling each client to inject only the minimal incremental noise required and thereby avoiding redundant perturbation. PrivateDFL provides formal $(\varepsilon,\delta)$-DP guarantees and is optimized for resource-constrained IoT devices. Evaluations on MNIST, ISOLET, and UCI-HAR under non-IID settings show accuracy gains of more than 80% over a Transformer baseline on ISOLET, roughly 10× shorter training time, and up to 76× lower inference latency and 11× lower energy consumption (on MNIST) versus baselines.
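The summary's core idea, adding only the gap between the required noise level and what earlier rounds already injected, can be sketched for the Gaussian mechanism, where variances of independent noise add. This is an illustrative reconstruction, not the paper's implementation; the function names and the variance-ledger representation are assumptions.

```python
import numpy as np

def incremental_noise_std(required_std: float, accumulated_var: float) -> float:
    # Variances of independent Gaussian noise add, so the extra noise a
    # client must inject has variance max(0, required_var - accumulated_var).
    inc_var = max(0.0, required_std ** 2 - accumulated_var)
    return inc_var ** 0.5

def perturb_update(update: np.ndarray, required_std: float,
                   accumulated_var: float, rng: np.random.Generator):
    # Add only the incremental noise and return the updated noise ledger,
    # so later clients can see how much perturbation is already in place.
    std = incremental_noise_std(required_std, accumulated_var)
    noisy = update + rng.normal(0.0, std, size=update.shape)
    return noisy, accumulated_var + std ** 2
```

If the accumulated variance already meets the requirement, no further noise is added, which is exactly the redundant perturbation the summary says PrivateDFL avoids.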

📝 Abstract
Decentralized federated learning faces privacy risks because model updates can leak data through inference attacks and membership inference, a concern that grows over many client exchanges. Differential privacy offers principled protection by injecting calibrated noise so confidential information remains secure on resource-limited IoT devices. Yet without transparency, black-box training cannot track noise already injected by previous clients and rounds, which forces worst-case additions and harms accuracy. We propose PrivateDFL, an explainable framework that joins hyperdimensional computing with differential privacy and keeps an auditable account of cumulative noise so each client adds only the difference between the required noise and what has already been accumulated. We evaluate on MNIST, ISOLET, and UCI-HAR to span image, signal, and tabular modalities, and we benchmark against transformer-based and deep learning-based baselines trained centrally with Differentially Private Stochastic Gradient Descent (DP-SGD) and Rényi Differential Privacy (RDP). PrivateDFL delivers higher accuracy, lower latency, and lower energy across IID and non-IID partitions while preserving formal (ε, δ) guarantees and operating without a central server. For example, under non-IID partitions, PrivateDFL achieves 24.42% higher accuracy than the Vision Transformer on MNIST while using about 10x less training time, 76x lower inference latency, and 11x less energy, and on ISOLET it exceeds Transformer accuracy by more than 80% with roughly 10x less training time, 40x lower inference latency, and 36x less training energy. Future work will extend the explainable accounting to adversarial clients and adaptive topologies with heterogeneous privacy budgets.
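As a companion to the formal (ε, δ) guarantees mentioned above, a minimal privacy-budget ledger under basic sequential composition (where the ε and δ costs of successive DP releases sum) might look like the following. The class name and API are illustrative assumptions; the paper's accountant is more refined than basic composition.

```python
class BudgetLedger:
    """Track spent privacy budget under basic sequential composition,
    where the (eps, delta) costs of successive DP releases add up."""

    def __init__(self, eps_total: float, delta_total: float):
        self.eps_total = eps_total
        self.delta_total = delta_total
        self.eps_spent = 0.0
        self.delta_spent = 0.0

    def can_spend(self, eps: float, delta: float) -> bool:
        # A release is admissible only if it keeps both totals in budget.
        return (self.eps_spent + eps <= self.eps_total
                and self.delta_spent + delta <= self.delta_total)

    def spend(self, eps: float, delta: float) -> None:
        if not self.can_spend(eps, delta):
            raise ValueError("privacy budget exhausted")
        self.eps_spent += eps
        self.delta_spent += delta
```

Tighter accountants (e.g. RDP, as used by the paper's baselines) spend less budget per release than this worst-case sum, which is part of why the choice of accounting matters.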
Problem

Research questions and friction points this paper is trying to address.

Mitigating privacy risks from model updates in decentralized federated learning
Addressing accuracy loss from worst-case noise addition in black-box training
Enabling auditable cumulative noise tracking for resource-constrained IoT devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explainable adaptive differential privacy framework
Auditable cumulative noise accounting system
Hyperdimensional computing integration for efficiency
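The hyperdimensional computing (HDC) component credited with the efficiency gains typically encodes inputs as high-dimensional bipolar vectors and builds class prototypes by bundling. A generic sketch is below; random-projection encoding is one common choice, and the paper's exact encoder may differ.

```python
import numpy as np

def hdc_encode(x: np.ndarray, proj: np.ndarray) -> np.ndarray:
    # Project a feature vector into a high-dimensional space and
    # binarize to a bipolar {-1, +1} hypervector.
    return np.where(proj @ x >= 0, 1, -1)

def bundle(hypervectors: np.ndarray) -> np.ndarray:
    # Class prototype: element-wise majority vote (sum, then sign).
    return np.where(hypervectors.sum(axis=0) >= 0, 1, -1)
```

Classification then reduces to comparing an encoded query against each class prototype by a cheap similarity such as a dot product, which is what makes HDC attractive on resource-constrained devices.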
Fardin Jalil Piran
School of Mechanical, Aerospace, and Manufacturing Engineering, University of Connecticut, Storrs, CT 06269
Zhiling Chen
School of Mechanical, Aerospace, and Manufacturing Engineering, University of Connecticut, Storrs, CT 06269
Yang Zhang
School of Mechanical, Aerospace, and Manufacturing Engineering, University of Connecticut, Storrs, CT 06269
Qianyu Zhou
The University of Tokyo
Computer Vision · Transfer Learning · Domain Generalization · Domain Adaptation · Anti-Spoofing
Jiong Tang
School of Mechanical, Aerospace, and Manufacturing Engineering, University of Connecticut, Storrs, CT 06269
Farhad Imani
School of Mechanical, Aerospace, and Manufacturing Engineering, University of Connecticut, Storrs, CT 06269