Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes a critical vulnerability: local differential privacy (LDP) fails to provably defend against membership inference attacks (MIAs) in federated visual learning. Addressing the lack of theoretical guarantees for MIA success under LDP, we establish the first lower bound on the success probability of polynomial-time MIAs against LDP-protected data, rigorously proving an intrinsic trade-off between the privacy budget ε and attack feasibility—refuting the common assumption that LDP inherently mitigates MIAs. Our approach integrates information-theoretic analysis, construction of low-degree polynomial adversaries, and vulnerability modeling of fully connected and self-attention layers. Empirical evaluation on CIFAR-10 and Fashion-MNIST shows that even with ε = 2, MIA accuracy exceeds 72%; conversely, strengthening the noise enough to resist such attacks degrades model accuracy by over 40%.
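The summary above hinges on how the privacy budget ε calibrates the noise a client adds before sharing an update. A minimal sketch of that mechanism, assuming Laplace-mechanism LDP on a clipped client update (the paper's exact perturbation scheme is not specified in this card; function names and the clipping constant are illustrative):

```python
import numpy as np

def ldp_perturb(grad, clip_norm=1.0, epsilon=2.0, rng=None):
    """Hypothetical LDP perturbation of one client update.

    Clip the update so its L1 norm (and hence the mechanism's
    sensitivity) is bounded, then add Laplace noise whose scale is
    inversely proportional to epsilon: a smaller privacy budget
    means heavier noise and, per the paper's trade-off, lower utility.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.asarray(grad, dtype=float)
    # Bound the L1 sensitivity by rescaling oversized updates.
    norm = np.linalg.norm(g, ord=1)
    if norm > clip_norm:
        g = g * (clip_norm / norm)
    # Replacing one record can change the clipped update by at most
    # 2 * clip_norm in L1, so the Laplace scale is 2 * clip_norm / eps.
    scale = 2.0 * clip_norm / epsilon
    return g + rng.laplace(0.0, scale, size=g.shape)
```

With the same random seed, shrinking ε from 8 to 0.5 multiplies the injected noise scale by 16, which is the ε-versus-feasibility dial the bound above is about.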

📝 Abstract
Federated Learning enables collaborative learning among clients via a coordinating server while avoiding direct data sharing, offering a perceived solution to preserve privacy. However, recent studies on Membership Inference Attacks (MIAs) have challenged this notion, showing high success rates against unprotected training data. While local differential privacy (LDP) is widely regarded as a gold standard for privacy protection in data analysis, most studies on MIAs either neglect LDP or fail to provide theoretical guarantees for attack success rates against LDP-protected data. To address this gap, we derive theoretical lower bounds for the success rates of low-polynomial time MIAs that exploit vulnerabilities in fully connected or self-attention layers. We establish that even when data are protected by LDP, privacy risks persist, depending on the privacy budget. Practical evaluations on federated vision models confirm considerable privacy risks, revealing that the noise required to mitigate these attacks significantly degrades models' utility.
Problem

Research questions and friction points this paper is trying to address.

Theoretical analysis of inference attacks on LDP-protected federated learning
Exploring vulnerabilities in fully connected and self-attention layers
Assessing privacy risks and utility trade-offs in federated vision models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Theoretical lower bounds on MIA success rates
Exploitation of vulnerabilities in fully connected and self-attention layers
Evaluation of LDP's impact on model utility
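To make the attack surface concrete, here is a hedged sketch of a loss-threshold membership inference baseline (the classic Yeom-style attack, not the paper's low-degree polynomial adversary, whose construction is only described theoretically): members of the training set tend to have lower loss, so a simple threshold already separates them.

```python
import numpy as np

def threshold_mia(losses, threshold):
    """Classify a sample as a training-set member when its loss falls
    below the threshold (members are usually fit better, hence lower
    loss). A standard MIA baseline, shown here for illustration only."""
    return np.asarray(losses) < threshold

def mia_accuracy(member_losses, nonmember_losses, threshold):
    """Balanced accuracy of the threshold attack: the mean of the
    true-positive rate on members and true-negative rate on non-members."""
    tpr = np.mean(threshold_mia(member_losses, threshold))
    tnr = np.mean(~threshold_mia(nonmember_losses, threshold))
    return 0.5 * (tpr + tnr)
```

On synthetic losses where members average 0.2 and non-members 1.0, this baseline already lands well above the 50% coin-flip rate, which is the kind of gap the paper's lower bounds show persists even under LDP noise.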