Local Differential Privacy for Federated Learning with Fixed Memory Usage and Per-Client Privacy

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning (FL), client updates and the global model can inadvertently leak sensitive data, posing compliance risks under regulations such as HIPAA and GDPR—especially in healthcare. Existing local differential privacy (LDP) approaches suffer from excessive resource overhead and fail to guarantee privacy under asynchronous participation, rendering them impractical for high-stakes domains. To address these limitations, we propose L-RDP, a lightweight, rigorously defined LDP mechanism tailored for FL. L-RDP features fixed and significantly reduced memory footprint to mitigate client dropout; enables precise, cumulative privacy budget accounting under asynchronous participation; and operates without centralized trust assumptions. Experiments demonstrate that, while strictly enforcing ε-differential privacy, L-RDP improves model generalization and cross-client fairness, and simultaneously reduces both communication and computational overhead. Thus, L-RDP delivers a verifiable, scalable, and regulation-compliant privacy-preserving framework for deploying FL in highly sensitive application domains.

📝 Abstract
Federated learning (FL) enables organizations to collaboratively train models without sharing their datasets. Despite this advantage, recent studies show that both client updates and the global model can leak private information, limiting adoption in sensitive domains such as healthcare. Local differential privacy (LDP) offers strong protection by letting each participant privatize updates before transmission. However, existing LDP methods were designed for centralized training and introduce challenges in FL, including high resource demands that can cause client dropouts and the lack of reliable privacy guarantees under asynchronous participation. These issues undermine model generalizability, fairness, and compliance with regulations such as HIPAA and GDPR. To address them, we propose L-RDP, an LDP method designed for FL that ensures constant, lower memory usage to reduce dropouts and provides rigorous per-client privacy guarantees by accounting for intermittent participation.
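The generic LDP recipe the abstract refers to, where each client privatizes its update before transmission, can be sketched as follows. This is a minimal illustration of the standard clip-and-perturb pattern (L1 clipping plus Laplace noise for ε-LDP), not L-RDP's actual mechanism, which the abstract does not specify; the function name and parameters are hypothetical.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, epsilon=1.0, rng=None):
    """Privatize one client update before sending it to the server.

    Generic clip-and-perturb sketch (NOT the paper's L-RDP mechanism):
    clip the update's L1 norm to `clip_norm`, then add per-coordinate
    Laplace noise. Any two clipped updates differ by at most
    2 * clip_norm in L1, so a Laplace scale of 2 * clip_norm / epsilon
    yields epsilon-LDP for this single release.
    """
    rng = np.random.default_rng() if rng is None else rng
    update = np.asarray(update, dtype=float)
    # Bound sensitivity: rescale so the L1 norm is at most clip_norm.
    norm = np.abs(update).sum()
    if norm > clip_norm:
        update = update * (clip_norm / norm)
    # Calibrate Laplace noise to L1 sensitivity 2 * clip_norm.
    scale = 2.0 * clip_norm / epsilon
    return update + rng.laplace(loc=0.0, scale=scale, size=update.shape)
```

Each client would apply this locally every round, so the server only ever sees noisy updates and no centralized trust is required.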
Problem

Research questions and friction points this paper is trying to address.

Addressing privacy leakage risks in federated learning client updates
Reducing client dropouts caused by high resource demands
Providing reliable privacy guarantees under asynchronous participation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LDP method ensuring constant, low memory usage
Provides rigorous per-client privacy guarantees
Addresses intermittent participation in federated learning
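The per-client guarantee under intermittent participation amounts to tracking each client's cumulative privacy loss across the rounds it actually joins. A minimal sketch using simple sequential composition (summing per-round ε values) is below; the class and its API are illustrative assumptions, and the paper's accounting (the name L-RDP suggests a Rényi-DP-style analysis) is likely tighter than plain summation.

```python
class PerClientBudget:
    """Track cumulative per-client privacy spend across FL rounds.

    Illustrative sketch using basic sequential composition: a client's
    total epsilon is the sum of the epsilons of the rounds it joined.
    Clients that skip rounds simply accrue nothing for those rounds.
    """

    def __init__(self, total_epsilon):
        self.total_epsilon = total_epsilon  # lifetime budget per client
        self.spent = {}                     # client_id -> epsilon used

    def try_participate(self, client_id, round_epsilon):
        """Admit the client to this round only if its budget allows it."""
        used = self.spent.get(client_id, 0.0)
        if used + round_epsilon > self.total_epsilon:
            return False  # budget exhausted; client must sit out
        self.spent[client_id] = used + round_epsilon
        return True
```

Because the ledger is keyed by client, the guarantee holds even when participation is asynchronous and intermittent: a client's spend depends only on the rounds it joined, not on the global round count.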