On the Detectability of Active Gradient Inversion Attacks in Federated Learning

📅 2025-11-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning (FL), malicious servers can launch stealthy active gradient inversion attacks (GIAs) to reconstruct clients’ private data from uploaded gradients, compromising privacy while evading detection by existing defenses. This work presents the first systematic evaluation of the detectability of four state-of-the-art active GIAs. We propose a lightweight, protocol-agnostic client-side detection framework that requires no modifications to the FL protocol. Our method jointly leverages statistical anomaly detection, weight-structure monitoring, and dynamic modeling of loss-gradient trajectories to enable real-time identification of anomalous server behavior. Extensive experiments across diverse FL configurations demonstrate that our approach achieves high detection accuracy (>98%) against all four attack variants—significantly outperforming baseline methods. To the best of our knowledge, this is the first practical, deployable detection framework specifically designed to defend against covert active GIAs in FL, advancing the state of privacy-preserving distributed learning.

📝 Abstract
One of the key advantages of Federated Learning (FL) is its ability to collaboratively train a Machine Learning (ML) model while keeping clients' data on-site. However, this can create a false sense of security. Although not sharing private data increases overall privacy, prior studies have shown that gradients exchanged during FL training remain vulnerable to Gradient Inversion Attacks (GIAs). These attacks allow reconstructing the clients' local data, breaking the privacy promise of FL. GIAs can be launched by either a passive or an active server. In the latter case, a malicious server manipulates the global model to facilitate data reconstruction. While effective, earlier attacks in this category have been shown to be detectable by clients, limiting their real-world applicability. Recently, novel active GIAs have emerged, claiming to be far stealthier than previous approaches. This work provides the first comprehensive analysis of these claims, investigating four state-of-the-art GIAs. We propose novel lightweight client-side detection techniques, based on statistically improbable weight structures and anomalous loss and gradient dynamics. Extensive evaluation across several configurations demonstrates that our methods enable clients to effectively detect active GIAs without any modifications to the FL training protocol.
Problem

Research questions and friction points this paper is trying to address.

Detecting stealthy active gradient inversion attacks in federated learning systems
Analyzing client-side detection of manipulated global model parameters
Identifying statistically anomalous weight structures and gradient dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detect attacks using improbable weight structures
Monitor anomalous loss and gradient dynamics
Lightweight client-side detection without protocol changes
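The paper does not publish its detector code here, but the two signals named above can be illustrated with a minimal, hypothetical sketch: a structural check that flags layers containing statistically improbable weight patterns (e.g., zeroed or duplicated neuron rows, which some active GIAs implant), and a loss z-score that flags anomalous training dynamics against the client's own history. Function names, thresholds, and the synthetic data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weight_structure_score(weights, tol=1e-6):
    """Fraction of near-zero or duplicated rows in a weight matrix.

    Hypothetical structural check: handcrafted layers implanted by an
    active GIA often contain identical or zeroed-out neuron rows, which
    benign SGD training is very unlikely to produce.
    """
    w = np.asarray(weights)
    near_zero = np.mean(np.all(np.abs(w) < tol, axis=1))
    # Count rows that are numerical duplicates of an earlier row.
    rounded = np.round(w / tol).astype(np.int64)
    _, counts = np.unique(rounded, axis=0, return_counts=True)
    duplicated = (counts.sum() - len(counts)) / len(w)
    return near_zero + duplicated

def loss_zscore(history, current):
    """Z-score of the current loss against this client's own loss history."""
    h = np.asarray(history, dtype=float)
    mu, sigma = h.mean(), h.std() + 1e-12
    return abs(current - mu) / sigma

# Synthetic demo: a benign random layer vs. one with implanted rows.
rng = np.random.default_rng(0)
benign = rng.normal(scale=0.05, size=(64, 128))
malicious = benign.copy()
malicious[:16] = 0.0                  # zeroed "trap" rows
malicious[16:32] = malicious[32:48]   # duplicated rows

print(weight_structure_score(benign) < 0.05)    # True
print(weight_structure_score(malicious) > 0.4)  # True

hist = [2.3, 2.1, 2.0, 1.9, 1.85]
print(loss_zscore(hist, 1.8) < 3)   # True: plausible next loss
print(loss_zscore(hist, 9.0) > 3)   # True: flagged as anomalous
```

Both checks run entirely client-side on the received global model and the local loss trace, consistent with the paper's claim of requiring no changes to the FL protocol; the thresholds (0.05, 0.4, 3) are placeholder values for this sketch.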
Vincenzo Carletti
Department of Computer Information and Electrical Engineering and Applied Mathematics, University of Salerno
P. Foggia
Department of Computer Information and Electrical Engineering and Applied Mathematics, University of Salerno
Carlo Mazzocca
Assistant Professor (Tenure Track), University of Salerno
Cybersecurity, Digital Identity, Federated Learning, Blockchain
Giuseppe Parrella
Department of Computer Information and Electrical Engineering and Applied Mathematics, University of Salerno
M. Vento
Department of Computer Information and Electrical Engineering and Applied Mathematics, University of Salerno