Neural Breadcrumbs: Membership Inference Attacks on LLMs Through Hidden State and Attention Pattern Analysis

📅 2025-09-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing membership inference attacks (MIAs) against large language models (LLMs) achieve only marginal improvements over random guessing, leading the research community to underestimate their privacy risks. This limitation arises because conventional loss-based MIAs overlook membership signals embedded in internal model representations. Method: We propose memTrace, the first framework to systematically extract “memory fragments” from LLMs’ hidden-layer activations, attention distributions, and cross-layer representation dynamics. memTrace models inter-layer transition patterns and employs multi-granularity representation learning for fine-grained membership inference. Contribution/Results: Experiments across multiple mainstream LLMs show that memTrace achieves an average AUC of 0.85—significantly outperforming state-of-the-art baselines. Our results demonstrate that LLM internal states retain exploitable traces of training data membership, revealing previously underestimated privacy vulnerabilities and establishing a new paradigm for rigorous LLM privacy risk assessment.

Technology Category

Application Category

📝 Abstract
Membership inference attacks (MIAs) reveal whether specific data was used to train machine learning models, serving as important tools for privacy auditing and compliance assessment. Recent studies have reported that MIAs perform only marginally better than random guessing against large language models, suggesting that modern pre-training approaches with massive datasets may be free from privacy leakage risks. Our work offers a complementary perspective to these findings by exploring how examining LLMs' internal representations, rather than just their outputs, may provide additional insights into potential membership inference signals. Our framework, emph{memTrace}, follows what we call enquote{neural breadcrumbs} extracting informative signals from transformer hidden states and attention patterns as they process candidate sequences. By analyzing layer-wise representation dynamics, attention distribution characteristics, and cross-layer transition patterns, we detect potential memorization fingerprints that traditional loss-based approaches may not capture. This approach yields strong membership detection across several model families achieving average AUC scores of 0.85 on popular MIA benchmarks. Our findings suggest that internal model behaviors can reveal aspects of training data exposure even when output-based signals appear protected, highlighting the need for further research into membership privacy and the development of more robust privacy-preserving training techniques for large language models.
Problem

Research questions and friction points this paper is trying to address.

Membership inference attacks on LLMs using internal representations
Analyzing hidden states and attention patterns for privacy risks
Detecting training data exposure beyond output-based signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzing transformer hidden states for membership inference
Extracting signals from attention patterns and layer dynamics
Detecting memorization fingerprints beyond traditional loss methods
🔎 Similar Papers
No similar papers found.