SOMP: Scalable Gradient Inversion for Large Language Models via Subspace-Guided Orthogonal Matching Pursuit

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of severe signal aliasing, high computational cost, and low fidelity in reconstructing private training text from aggregated gradients under large-batch, long-sequence settings. Framing the problem as sparse signal recovery, it reveals—for the first time—the geometric structure and sample-level sparsity embedded in the head dimension of Transformer gradients. Building on this insight, the authors propose Subspace-guided Orthogonal Matching Pursuit (SOMP), which efficiently disentangles mixed signals without exhaustive search. The method substantially outperforms existing approaches across multiple large language models, varying scales, and five languages: it achieves markedly higher reconstruction fidelity at batch size B=16 and remains effective even under extreme aggregation conditions (B=128), recovering semantically coherent text.

📝 Abstract
Gradient inversion attacks reveal that private training text can be reconstructed from shared gradients, posing a privacy risk to large language models (LLMs). While prior methods perform well in small-batch settings, scaling to larger batch sizes and longer sequences remains challenging due to severe signal mixing, high computational cost, and degraded fidelity. We present SOMP (Subspace-Guided Orthogonal Matching Pursuit), a scalable gradient inversion framework that casts text recovery from aggregated gradients as a sparse signal recovery problem. Our key insight is that aggregated transformer gradients retain exploitable head-wise geometric structure together with sample-level sparsity. SOMP leverages these properties to progressively narrow the search space and disentangle mixed signals without exhaustive search. Experiments across multiple LLM families, model scales, and five languages show that SOMP consistently outperforms prior methods in the aggregated-gradient regime. For long sequences at batch size B=16, SOMP achieves substantially higher reconstruction fidelity than strong baselines, while remaining computationally competitive. Even under extreme aggregation (up to B=128), SOMP still recovers meaningful text, suggesting that privacy leakage can persist in regimes where prior attacks become much less effective.
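The paper's subspace-guided variant is not detailed in this summary, but its sparse-recovery framing builds on classic Orthogonal Matching Pursuit: greedily pick the dictionary atom most correlated with the current residual, then re-fit all selected atoms by least squares. A minimal NumPy sketch of plain OMP is below; this is not the authors' SOMP, and the dictionary `D`, measurement `y`, and sparsity level `k` are illustrative assumptions only.

```python
import numpy as np

def omp(D, y, k):
    """Classic Orthogonal Matching Pursuit.

    Recovers a k-sparse coefficient vector x such that y ~ D @ x,
    where D has (approximately) unit-norm columns ("atoms").
    """
    residual = y.astype(float).copy()
    support = []                      # indices of selected atoms
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Greedy step: atom with the largest absolute correlation
        # to the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Orthogonal step: least-squares re-fit on the full support,
        # so the residual stays orthogonal to every selected atom.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

In the gradient-inversion analogy drawn by the abstract, the "atoms" would correspond to candidate per-sample signal directions inside the head-wise gradient structure, and sample-level sparsity is what makes the greedy selection tractable without exhaustive search.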
Problem

Research questions and friction points this paper is trying to address.

gradient inversion
large language models
privacy leakage
aggregated gradients
signal mixing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient Inversion
Sparse Signal Recovery
Orthogonal Matching Pursuit
Large Language Models
Privacy Attack
Yibo Li
National University of Singapore

Qiongxiu Li
Aalborg University