Learning the Signature of Memorization in Autoregressive Language Models

📅 2026-04-03
🤖 AI Summary
This work addresses the limited generalizability of traditional membership inference attacks, which rely on handcrafted heuristics. The authors propose the first transferable, end-to-end learning-based attack that reformulates membership inference as a sequence classification task grounded in per-token probability distribution statistics. Their method leverages naturally occurring labeled data from model fine-tuning to train a classifier without requiring shadow models. A key insight is that fine-tuned language models exhibit architecture-agnostic memorization signatures, enabling zero-shot transfer across diverse model families—including Transformer, Mamba, RWKV-4, and RecurrentGemma. The approach achieves AUC scores of 0.936–0.972 on unseen architecture–dataset combinations, yielding a 2.8× improvement in true positive rate at a 0.1% false positive rate over the strongest baseline, and successfully generalizes to code domains with an AUC of 0.865.
📝 Abstract
All prior membership inference attacks for fine-tuned language models use hand-crafted heuristics (e.g., loss thresholding, Min-K%, reference calibration), each bounded by the designer's intuition. We introduce the first transferable learned attack, enabled by the observation that fine-tuning any model on any corpus yields unlimited labeled data, since membership is known by construction. This removes the shadow model bottleneck and brings membership inference into the deep learning era: learning what matters rather than designing it, with generalization through training diversity and scale. We discover that fine-tuning language models produces an invariant signature of memorization detectable across architectural families and data domains. We train a membership inference classifier exclusively on transformer-based models. It transfers zero-shot to Mamba (state-space), RWKV-4 (linear attention), and RecurrentGemma (gated recurrence), achieving 0.963, 0.972, and 0.936 AUC respectively. Each evaluation combines an architecture and dataset never seen during training, yet all three exceed performance on held-out transformers (0.908 AUC). These four families share no computational mechanisms; their only commonality is gradient descent on cross-entropy loss. Even simple likelihood-based methods exhibit strong transfer, confirming the signature exists independently of the detection method. Our method, Learned Transfer MIA (LT-MIA), captures this signal most effectively by reframing membership inference as sequence classification over per-token distributional statistics. On transformers, LT-MIA achieves 2.8× higher TPR at 0.1% FPR than the strongest baseline. The method also transfers to code (0.865 AUC) despite training only on natural language texts. Code and the trained classifier are available at https://github.com/JetBrains-Research/learned-mia.
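The abstract contrasts hand-crafted heuristics (loss thresholding, Min-K%) with learned classification over per-token distributional statistics. A minimal sketch of what those per-token statistics and a classic loss-threshold baseline look like is below; the specific feature set and threshold value are illustrative assumptions, not the paper's actual design.

```python
from statistics import mean, pstdev

def per_token_stats(token_logprobs, k_frac=0.2):
    """Summary statistics over a sequence's per-token log-probabilities.

    A hypothetical feature set of the kind a learned MIA classifier could
    consume, instead of relying on any single hand-crafted score.
    """
    k = max(1, int(len(token_logprobs) * k_frac))
    min_k = sorted(token_logprobs)[:k]  # lowest-probability tokens (Min-K%-style)
    return {
        "mean_logprob": mean(token_logprobs),  # loss thresholding uses its negation
        "min_k_mean": mean(min_k),             # Min-K% heuristic statistic
        "std_logprob": pstdev(token_logprobs),
        "min_logprob": min(token_logprobs),
    }

def loss_threshold_attack(token_logprobs, threshold=1.5):
    """Classic baseline: predict 'member' when average loss falls below a
    threshold. The threshold here is an arbitrary illustrative value."""
    return -mean(token_logprobs) < threshold

# Toy example: a memorized (member) sequence tends to get higher log-probs.
member = [-0.3, -0.5, -0.2, -0.8, -0.4]
non_member = [-2.9, -3.5, -4.1, -2.2, -3.8]
print(loss_threshold_attack(member))      # True  (loss 0.44 < 1.5)
print(loss_threshold_attack(non_member))  # False (loss 3.30 >= 1.5)
```

A learned attack in the spirit of LT-MIA would feed such per-token features (or the full per-token sequence) into a trained sequence classifier rather than comparing one scalar to a threshold.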
Problem

Research questions and friction points this paper is trying to address.

membership inference
language models
memorization
transfer learning
autoregressive models
Innovation

Methods, ideas, or system contributions that make the work stand out.

membership inference
transferable attack
memorization signature
deep learning-based MIA
zero-shot transfer
David Ilić
JetBrains Research
Kostadin Cvejoski
JetBrains
LLMs · Deep Learning · Point Processes · Dynamic Language Models
David Stanojević
JetBrains Research
Evgeny Grigorenko
JetBrains Research