LeakBoost: Perceptual-Loss-Based Membership Inference Attack

📅 2026-02-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the limitations of existing membership inference attacks, which rely on static metrics and struggle to effectively expose privacy leakage in model training data. The authors propose LeakBoost, a novel framework that introduces, for the first time, a perceptual loss-based active querying mechanism. By optimizing synthetic images to dynamically amplify representational differences between members and non-members within the model's internal activations, LeakBoost enables highly effective inference without requiring modifications to existing detectors. The approach synergistically combines activation space analysis with gradient-driven image synthesis, substantially enhancing privacy risk assessment under white-box settings. Extensive experiments across multiple image classification datasets and network architectures demonstrate strong performance, achieving AUC scores of 0.81-0.88 and improving true positive rates by over an order of magnitude at a 1% false positive rate.

๐Ÿ“ Abstract
Membership inference attacks (MIAs) aim to determine whether a sample was part of a model's training set, posing serious privacy risks for modern machine-learning systems. Existing MIAs primarily rely on static indicators, such as loss or confidence, and do not fully leverage the dynamic behavior of models when actively probed. We propose LeakBoost, a perceptual-loss-based interrogation framework that actively probes a model's internal representations to expose hidden membership signals. Given a candidate input, LeakBoost synthesizes an interrogation image by optimizing a perceptual (activation-space) objective, amplifying representational differences between members and non-members. This image is then analyzed by an off-the-shelf membership detector, without modifying the detector itself. When combined with existing membership inference methods, LeakBoost achieves substantial improvements at low false-positive rates across multiple image classification datasets and diverse neural network architectures. In particular, it raises AUC from near-chance levels (0.53-0.62) to 0.81-0.88, and increases TPR at 1 percent FPR by over an order of magnitude compared to strong baseline attacks. A detailed sensitivity analysis reveals that deeper layers and short, low-learning-rate optimization produce the strongest leakage, and that improvements concentrate in gradient-based detectors. LeakBoost thus offers a modular and computationally efficient way to assess privacy risks in white-box settings, advancing the study of dynamic membership inference.
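The abstract describes interrogation-image synthesis as gradient-driven optimization of an activation-space objective. The toy sketch below illustrates that general idea under loud assumptions: it uses a fixed random linear map as a stand-in "internal layer" and ascends the gradient of the activation norm by hand. This is not the paper's actual objective or architecture, only a minimal illustration of optimizing an input to amplify its internal-representation signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an internal layer's feature extractor (an assumption
# for illustration; LeakBoost operates on deep-network activations).
W = rng.normal(size=(32, 64))

def features(x):
    # "Activations" of the toy layer.
    return W @ x

def interrogate(x0, steps=50, lr=0.01):
    """Hypothetical probe synthesis: gradient-ascend the activation-norm
    objective ||W x||^2 starting from the candidate input x0, so the
    synthesized image carries an amplified representational signal."""
    x = x0.copy()
    for _ in range(steps):
        # d/dx ||W x||^2 = 2 W^T W x
        grad = 2.0 * W.T @ (W @ x)
        x += lr * grad / (np.linalg.norm(grad) + 1e-12)  # normalized step
    return x

x0 = rng.normal(size=64)
x_syn = interrogate(x0)
# The synthesized probe has a larger activation norm than the raw input.
print(np.linalg.norm(features(x_syn)) > np.linalg.norm(features(x0)))  # True
```

In the actual framework the probe would then be passed to an unmodified off-the-shelf membership detector; here the sketch stops at synthesis, since the detector is orthogonal to the optimization step.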
Problem

Research questions and friction points this paper aims to address.

membership inference attack
privacy risk
training data exposure
model privacy
white-box setting
Innovation

Methods, ideas, or system contributions that make the work stand out.

membership inference attack
perceptual loss
active probing
representation leakage
white-box privacy
Amit Kravchik Taub
Ben-Gurion University, Israel

Fred M. Grabovski
Ben-Gurion University, Israel

Guy Amit
Senior Researcher, KI - The Israeli Institute for Applied Research in Computational Health
Computational Health · Machine Learning · Big Data · Medical Imaging · Biomedical Signal Processing

Yisroel Mirsky
Ben-Gurion University, Israel