🤖 AI Summary
Existing membership inference attacks (MIAs) against large language models (LLMs) achieve only marginal improvements over random guessing, leading the research community to underestimate their privacy risks. This limitation arises because conventional loss-based MIAs overlook membership signals embedded in internal model representations. Method: We propose memTrace, the first framework to systematically extract “memory fragments” from LLMs’ hidden-layer activations, attention distributions, and cross-layer representation dynamics. memTrace models inter-layer transition patterns and employs multi-granularity representation learning for fine-grained membership inference. Contribution/Results: Experiments across multiple mainstream LLMs show that memTrace achieves an average AUC of 0.85—significantly outperforming state-of-the-art baselines. Our results demonstrate that LLM internal states retain exploitable traces of training data membership, revealing previously underestimated privacy vulnerabilities and establishing a new paradigm for rigorous LLM privacy risk assessment.
📝 Abstract
Membership inference attacks (MIAs) reveal whether specific data was used to train machine learning models, serving as important tools for privacy auditing and compliance assessment. Recent studies have reported that MIAs perform only marginally better than random guessing against large language models, suggesting that modern pre-training approaches with massive datasets may be free from privacy leakage risks. Our work offers a complementary perspective to these findings by exploring how examining LLMs' internal representations, rather than just their outputs, may provide additional insights into potential membership inference signals. Our framework, emph{memTrace}, follows what we call enquote{neural breadcrumbs} extracting informative signals from transformer hidden states and attention patterns as they process candidate sequences. By analyzing layer-wise representation dynamics, attention distribution characteristics, and cross-layer transition patterns, we detect potential memorization fingerprints that traditional loss-based approaches may not capture. This approach yields strong membership detection across several model families achieving average AUC scores of 0.85 on popular MIA benchmarks. Our findings suggest that internal model behaviors can reveal aspects of training data exposure even when output-based signals appear protected, highlighting the need for further research into membership privacy and the development of more robust privacy-preserving training techniques for large language models.