FedSpy-LLM: Towards Scalable and Generalizable Data Reconstruction Attacks from Gradients on LLMs

📅 2026-04-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing gradient inversion attacks struggle to achieve high-fidelity, scalable, and architecture-agnostic reconstruction of training data from large language models (LLMs) in federated learning settings employing parameter-efficient fine-tuning (PEFT). This work proposes FedSpy-LLM, which introduces a novel gradient decomposition strategy to exploit the rank deficiency and subspace structure inherent in gradients, enabling efficient extraction of salient tokens. Coupled with an iterative alignment mechanism to recover sequence ordering, FedSpy-LLM achieves, for the first time, high-quality reconstruction of batched, long-sequence training data from PEFT-adapted LLMs. The method supports diverse architectures—including encoder-only, decoder-only, and encoder-decoder models—and significantly outperforms existing approaches in both reconstruction fidelity and scalability, thereby exposing critical privacy vulnerabilities in current real-world deployments.
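The subspace idea in the summary can be pictured with a short sketch. Everything below is an illustrative assumption rather than the paper's actual procedure: the helper name `extract_candidate_tokens`, the choice of an embedding-tied weight gradient as input, and the SVD-plus-projection scoring are placeholders for whatever decomposition FedSpy-LLM really uses.

```python
import torch

def extract_candidate_tokens(grad_W, embedding_matrix, energy_threshold=0.99, top_k=256):
    """Illustrative subspace-based token extraction (hypothetical helper).

    grad_W           : shared gradient of a weight that interacts with token
                       embeddings, assumed shape (out_dim, hidden).
    embedding_matrix : model token embeddings, shape (vocab_size, hidden).
    Returns ids of tokens whose embeddings lie mostly inside the gradient's
    dominant (low-rank) subspace.
    """
    # Gradients produced by a small batch of short sequences are rank
    # deficient: keep only the singular directions carrying most of the energy.
    U, S, Vh = torch.linalg.svd(grad_W, full_matrices=False)
    energy = torch.cumsum(S ** 2, dim=0) / torch.sum(S ** 2)
    rank = int((energy < energy_threshold).sum().item()) + 1
    basis = Vh[:rank]                                  # (rank, hidden)

    # Score every vocabulary embedding by the fraction of its norm explained
    # by the gradient subspace; tokens present in the private batch should
    # score close to 1, absent tokens close to 0.
    proj = embedding_matrix @ basis.T                  # (vocab_size, rank)
    scores = proj.norm(dim=1) / embedding_matrix.norm(dim=1).clamp_min(1e-12)
    return torch.topk(scores, k=min(top_k, scores.numel())).indices
```

For PEFT-adapted models, the same idea would presumably be applied to the low-rank adapter gradients rather than a full weight gradient, which is where the paper's handling of PEFT's large null space comes in.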
📝 Abstract
Given the growing reliance on private data in training Large Language Models (LLMs), Federated Learning (FL) combined with Parameter-Efficient Fine-Tuning (PEFT) has garnered significant attention for enhancing privacy and efficiency. Despite FL's privacy benefits, prior studies have shown that private data can still be extracted from shared gradients. However, these studies, which mainly target full-parameter training, are limited to reconstructing small batches, short input sequences, and specific model architectures such as encoder-based or decoder-based models, and their reconstruction quality degrades further when the gradients come from PEFT methods. To fully characterize the practical attack surface of federated LLMs, this paper proposes FedSpy-LLM, a scalable and generalizable data reconstruction attack designed to reconstruct training data with larger batch sizes and longer sequences while generalizing across diverse model architectures, even when PEFT methods are deployed for training. At the core of FedSpy-LLM is a novel gradient decomposition strategy that exploits the rank deficiency and subspace structure of gradients, enabling efficient token extraction while preserving key signal components at scale. This approach also mitigates the reconstruction challenges introduced by PEFT's substantial null space, ensuring robustness across encoder-based, decoder-based, and encoder-decoder architectures. Finally, by iteratively aligning each token's partial-sequence gradient with the full-sequence gradient, FedSpy-LLM recovers accurate token ordering in the reconstructed sequences.
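The ordering mechanism described in the abstract can likewise be sketched as a greedy gradient-matching loop. The helper `loss_fn`, the cosine-similarity criterion, and the greedy (rather than beam) search below are assumptions made for illustration; the paper's actual alignment objective may differ.

```python
import torch
import torch.nn.functional as F

def order_tokens_by_gradient_alignment(model, loss_fn, observed_grads,
                                        candidate_tokens, seq_len):
    """Greedy sketch of partial- vs. full-sequence gradient alignment.

    model            : the (PEFT-adapted) LLM under attack.
    loss_fn          : hypothetical helper mapping (model, token_id_batch)
                       to the scalar training loss.
    observed_grads   : gradients shared by the victim client, one tensor per
                       trainable parameter.
    candidate_tokens : list of token ids recovered by the extraction step.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    sequence = []
    for _ in range(seq_len):
        best_token, best_score = None, float("-inf")
        for tok in candidate_tokens:
            trial = torch.tensor([sequence + [tok]])      # batch of one
            loss = loss_fn(model, trial)
            grads = torch.autograd.grad(loss, params)
            # How well does the partial-sequence gradient align with the
            # observed full-sequence gradient, summed over all layers?
            score = float(sum(
                F.cosine_similarity(g.flatten(), o.flatten(), dim=0)
                for g, o in zip(grads, observed_grads)))
            if score > best_score:
                best_score, best_token = score, tok
        sequence.append(best_token)          # commit the best next token
    return sequence
```

In this sketch the per-candidate gradient evaluations dominate the cost, which is why keeping the candidate pool small via the extraction step matters for scaling to longer sequences and larger batches.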
Problem

Research questions and friction points this paper is trying to address.

data reconstruction attack
federated learning
large language models
parameter-efficient fine-tuning
gradient leakage
Innovation

Methods, ideas, or system contributions that make the work stand out.

gradient inversion
federated learning
parameter-efficient fine-tuning
data reconstruction attack
large language models
Syed Irfan Ali Meerza
University of Tennessee, Knoxville, TN, USA
Feiyi Wang
Distinguished Research Scientist & Group Leader, Analytics and AI Methods at Scale, NCCS/ORNL
HPCAI for Science at Scale
Jian Liu
University of Georgia, Athens, GA, USA