🤖 AI Summary
This work addresses the limitations of existing membership inference attacks, which rely on static metrics and struggle to expose privacy leakage from a model's training data. The authors propose LeakBoost, a framework that introduces, for the first time, a perceptual loss-based active querying mechanism. By optimizing synthetic images to dynamically amplify representational differences between members and non-members within the model's internal activations, LeakBoost enables highly effective inference without requiring any modification to existing detectors. The approach combines activation-space analysis with gradient-driven image synthesis, substantially enhancing privacy risk assessment in white-box settings. Extensive experiments across multiple image classification datasets and network architectures demonstrate strong performance, achieving AUC scores of 0.81–0.88 and improving the true positive rate at a 1% false positive rate by over an order of magnitude.
📄 Abstract
Membership inference attacks (MIAs) aim to determine whether a sample was part of a model's training set, posing serious privacy risks for modern machine-learning systems. Existing MIAs rely primarily on static indicators, such as loss or confidence, and do not fully exploit the dynamic behavior of models under active probing. We propose LeakBoost, a perceptual-loss-based interrogation framework that actively probes a model's internal representations to expose hidden membership signals. Given a candidate input, LeakBoost synthesizes an interrogation image by optimizing a perceptual (activation-space) objective, amplifying representational differences between members and non-members. This image is then analyzed by an off-the-shelf membership detector, without any modification to the detector itself. When combined with existing membership inference methods, LeakBoost achieves substantial improvements at low false positive rates across multiple image classification datasets and diverse neural network architectures. In particular, it raises AUC from near-chance levels (0.53–0.62) to 0.81–0.88 and increases the true positive rate (TPR) at a 1% false positive rate (FPR) by over an order of magnitude compared to strong baseline attacks. A detailed sensitivity analysis reveals that deeper layers and short, low-learning-rate optimization produce the strongest leakage, and that the improvements concentrate in gradient-based detectors. LeakBoost thus offers a modular, computationally efficient way to assess privacy risks in white-box settings, advancing the study of dynamic membership inference.
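The interrogation step described in the abstract can be illustrated with a minimal sketch. Everything here is hypothetical scaffolding: the linear "feature extractor" `W`, the function names, and the step count and learning rate are illustrative stand-ins, not the paper's implementation, which optimizes a perceptual objective over a trained network's deep activations. Because the toy extractor is linear, the gradient of the activation-space distance can be written analytically instead of via autodiff:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a white-box feature extractor: one linear map.
# LeakBoost itself would use deep activations of the trained target model.
W = rng.standard_normal((16, 8))

def activations(x):
    return W @ x

def perceptual_gap(x, x_ref):
    # Activation-space ("perceptual") distance between probe and candidate.
    return float(np.sum((activations(x) - activations(x_ref)) ** 2))

def synthesize_probe(x_candidate, steps=40, lr=1e-3):
    """Gradient ascent on the perceptual objective: starting from a tiny
    perturbation of the candidate, the probe drifts away from it in
    activation space, amplifying the representational signal the model
    carries about the candidate. Short, low-learning-rate runs mirror the
    regime the sensitivity analysis found most effective."""
    x = x_candidate + 1e-3 * rng.standard_normal(x_candidate.shape)
    for _ in range(steps):
        # Analytic gradient of ||W x - W x_ref||^2 w.r.t. x: 2 W^T (W x - W x_ref)
        grad = 2.0 * W.T @ (activations(x) - activations(x_candidate))
        x = x + lr * grad  # ascend: widen the activation gap
    return x

x0 = rng.standard_normal(8)
probe = synthesize_probe(x0)
```

In the full attack, the synthesized `probe` (not the raw candidate) would then be passed to an unmodified off-the-shelf membership detector, which is what makes the framework modular.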