Byzantine Outside, Curious Inside: Reconstructing Data Through Malicious Updates

📅 2025-06-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a novel threat in federated learning (FL)—the “maliciously curious client”—which actively manipulates uploaded gradients via Byzantine behavior to enable cross-client private data reconstruction. We formally define a client-side gradient-driven data reconstruction threat model and propose a joint optimization algorithm integrating gradient inversion with adversarial gradient construction, accompanied by theoretical analysis of reconstructability. Experiments demonstrate high-fidelity reconstruction of other clients’ private images under standard FL settings. Crucially, existing server-side robust aggregation methods (e.g., Krum, Median) and client-side differential privacy mechanisms fail to mitigate this attack; instead, they improve reconstruction quality by 10–15%, exposing a fundamental blind spot in current FL security paradigms. This work challenges prevailing assumptions about the efficacy of mainstream defenses and establishes a new benchmark for privacy risk assessment and defense design in FL.

📝 Abstract
Federated learning (FL) enables decentralized machine learning without sharing raw data, allowing multiple clients to collaboratively learn a global model. However, studies reveal that privacy leakage is possible under commonly adopted FL protocols. In particular, a server with access to client gradients can synthesize data resembling the clients' training data. In this paper, we introduce a novel threat model in FL, named the maliciously curious client, in which a client manipulates its own gradients with the goal of inferring private data from peers. This attacker uniquely exploits the strength of a Byzantine adversary, traditionally aimed at undermining model robustness, and repurposes it to facilitate a data reconstruction attack. We begin by formally defining this novel client-side threat model and providing a theoretical analysis that demonstrates its ability to achieve significant reconstruction success during FL training. To demonstrate its practical impact, we further develop a reconstruction algorithm that combines gradient inversion with malicious update strategies. Our analysis and experimental results reveal a critical blind spot in FL defenses: both server-side robust aggregation and client-side privacy mechanisms may fail against the proposed attack. Surprisingly, standard server- and client-side defenses designed to enhance robustness or privacy may unintentionally amplify data leakage: compared to the baseline attack, a misapplied defense may improve reconstructed image quality by 10-15%.
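The gradient-leakage premise behind the attack can be illustrated with a toy example. This is a generic sketch of why shared gradients expose inputs, not the paper's joint optimization algorithm: for a linear layer under squared-error loss, the per-sample gradient is rank-1, so the direction of the private input can be read off directly from the uploaded gradient.

```python
# Toy illustration (not the paper's algorithm): per-sample gradients of a
# linear layer leak the input. With loss L = ||W x - y||^2, the gradient
# dL/dW = 2 (W x - y) x^T is rank-1, so every row of the uploaded
# gradient is a scalar multiple of the private input x.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))            # shared model weights
x_true = rng.normal(size=6)            # victim's private input
y_true = rng.normal(size=4)            # victim's private label

residual = W @ x_true - y_true
g_uploaded = 2.0 * np.outer(residual, x_true)   # gradient the victim shares

# Observer: take the largest row of the observed gradient; it is
# proportional to x, so the input's direction is recovered exactly.
row = g_uploaded[np.argmax(np.linalg.norm(g_uploaded, axis=1))]
x_hat = row / np.linalg.norm(row)

cos = abs(x_hat @ x_true) / np.linalg.norm(x_true)
print(f"cosine similarity with private input: {cos:.6f}")
```

For deep networks the recovery is not closed-form; iterative gradient inversion instead optimizes a dummy sample until its gradient matches the observed one, which is the component the paper combines with malicious update construction.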
Problem

Research questions and friction points this paper is trying to address.

A novel FL threat model: maliciously curious client inferring peer data
Client manipulates gradients to exploit Byzantine strength for data reconstruction
Existing FL defenses fail and may amplify data leakage risks
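The client-side privacy mechanism among these defenses is commonly instantiated as per-update gradient clipping plus Gaussian noise. A minimal sketch of that common form, with illustrative parameter names and values not taken from the paper:

```python
# Hedged sketch of a typical client-side DP gradient sanitizer
# (clip to a norm bound, then add Gaussian noise scaled to the bound).
# Parameter values are illustrative, not from the paper.
import numpy as np

def dp_sanitize(grad, clip_norm=1.0, noise_mult=1.1, rng=None):
    """Clip a gradient to clip_norm, then add Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / norm)
    return clipped + rng.normal(scale=noise_mult * clip_norm,
                                size=grad.shape)

g = np.array([3.0, 4.0])               # norm 5 -> scaled down to norm 1
noisy = dp_sanitize(g, rng=np.random.default_rng(0))
print(noisy)
```

The paper's finding is that this style of mechanism may fail to block, and can even aid, the cross-client reconstruction attack.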
Innovation

Methods, ideas, or system contributions that make the work stand out.

Malicious client manipulates gradients for data inference
Combines gradient inversion with malicious update strategies
Exploits Byzantine adversary for data reconstruction attacks
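To make the server-side defenses named above concrete, here is a minimal sketch of coordinate-wise Median and Krum aggregation with illustrative values; the paper's claim is that such robust aggregators can fail against, and even improve, the reconstruction attack.

```python
# Hedged sketch of two standard robust aggregators (illustrative values,
# not the paper's experimental setup).
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median over stacked client updates."""
    return np.median(np.stack(updates), axis=0)

def krum(updates, n_byz):
    """Krum: pick the update with the smallest summed squared distance
    to its n - n_byz - 2 nearest peers."""
    U = np.stack(updates)
    n = len(U)
    k = n - n_byz - 2                   # neighbors scored per client
    d = np.linalg.norm(U[:, None] - U[None, :], axis=2) ** 2
    scores = [np.sort(d[i])[1:k + 1].sum() for i in range(n)]  # skip self
    return U[int(np.argmin(scores))]

honest = [np.array([1.0, 2.0, 3.0]),
          np.array([1.1, 1.9, 3.2]),
          np.array([0.9, 2.1, 2.8])]
malicious = np.array([100.0, -100.0, 100.0])    # crafted Byzantine update

agg = median_aggregate(honest + [malicious])
print("median:", agg)                   # outlier suppressed per coordinate
print("krum:  ", krum(honest + [malicious], n_byz=1))
```

Both aggregators discard the obvious outlier here; the attack in this paper works precisely because a maliciously curious client can craft updates that survive such filtering while steering the global model toward leaking peers' data.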