🤖 AI Summary
This work addresses a limitation of existing membership inference attacks, which rely on global signals such as average loss and consequently fail to capture fine-grained memorization traces, leading to suboptimal performance. To overcome this, the authors propose Window-Based Comparison (WBC), which abandons the conventional global-averaging paradigm and instead systematically leverages multi-scale local contextual signals. Specifically, WBC slides windows of multiple sizes over the sequence and compares the loss of a target model against that of a reference model within each window, combining sign-based aggregation with a geometrically spaced ensemble of window sizes to form a local binary voting mechanism for identifying training samples. This approach significantly improves attack sensitivity and accuracy, consistently outperforming current baselines across 11 datasets with notably higher AUC scores and achieving a 2-3x improvement in detection rate in low false-positive regimes.
📝 Abstract
Most membership inference attacks (MIAs) against Large Language Models (LLMs) rely on global signals, like average loss, to identify training data. This approach, however, dilutes the subtle, localized signals of memorization, reducing attack effectiveness. We challenge this global-averaging paradigm, positing that membership signals are more pronounced within localized contexts. We introduce WBC (Window-Based Comparison), which exploits this insight through a sliding window approach with sign-based aggregation. Our method slides windows of varying sizes across text sequences, with each window casting a binary vote on membership based on loss comparisons between target and reference models. By ensembling votes across geometrically spaced window sizes, we capture memorization patterns from token-level artifacts to phrase-level structures. Extensive experiments across eleven datasets demonstrate that WBC substantially outperforms established baselines, achieving higher AUC scores and 2-3 times improvements in detection rates at low false positive thresholds. Our findings reveal that aggregating localized evidence is fundamentally more effective than global averaging, exposing critical privacy vulnerabilities in fine-tuned LLMs.
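The voting scheme described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: function names, the window-size schedule, and the convention that lower target-model loss signals memorization are all assumptions. Each sliding window casts a sign-based vote on whether the target model fits that span better than the reference model, votes are averaged within each scale, and scales are ensembled into a single membership score.

```python
import numpy as np

def wbc_score(target_losses, ref_losses, window_sizes=(1, 2, 4, 8, 16)):
    """Hypothetical sketch of a window-based comparison membership score.

    target_losses, ref_losses: per-token losses for the same sequence
    under the target and reference models. Returns a score in [-1, 1];
    higher values suggest the sequence was in the target's training data.
    """
    t = np.asarray(target_losses, dtype=float)
    r = np.asarray(ref_losses, dtype=float)
    diff = r - t  # positive where the target model fits the text better
    per_scale = []
    for w in window_sizes:  # geometrically spaced window sizes (assumed schedule)
        if w > len(diff):
            continue
        # total loss difference over every sliding window of size w
        window_sums = np.convolve(diff, np.ones(w), mode="valid")
        votes = np.sign(window_sums)   # binary vote per window (+1 member, -1 non-member)
        per_scale.append(votes.mean()) # aggregate votes within this scale
    return float(np.mean(per_scale))   # ensemble across scales
```

A sequence the target model memorized verbatim would have uniformly lower losses than the reference, so every window at every scale votes +1 and the score approaches 1; scattered, token-level memorization would be picked up mainly by the small windows while the large ones stay near zero.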