Empirical Comparison of Membership Inference Attacks in Deep Transfer Learning

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Systematic evaluation of membership inference attacks (MIAs) in deep transfer learning remains lacking. Method: This paper conducts the first empirical comparison of multiple MIA approaches—including score-based attacks, the Likelihood Ratio Attack (LiRA), and the Inverse Hessian Attack (IHA)—within a unified experimental framework, varying data scale, fine-tuning strategies, and data distribution shifts. Contribution/Results: No single attack universally exposes all privacy risks; LiRA demonstrates superior robustness across most settings; IHA significantly outperforms alternatives on large-scale data and specific datasets; score-based attacks degrade markedly with increasing training data. These findings yield a reproducible, scenario-adapted practical guide for privacy risk assessment in transfer learning models, filling a critical gap in systematic benchmarking for this domain.

📝 Abstract
With the emergence of powerful large-scale foundation models, the training paradigm is increasingly shifting from from-scratch training to transfer learning. This enables high-utility training with the small, domain-specific datasets typical of sensitive applications. Membership inference attacks (MIAs) provide an empirical estimate of the privacy leakage of machine learning models. Yet, prior assessments of MIAs against models fine-tuned with transfer learning rely on a small subset of possible attacks. We address this by comparing the performance of diverse MIAs in transfer learning settings to help practitioners identify the most efficient attacks for privacy risk evaluation. We find that attack efficacy decreases as the amount of training data increases for score-based MIAs. We also find that no single MIA captures all privacy risks in models trained with transfer learning. While the Likelihood Ratio Attack (LiRA) demonstrates superior performance across most experimental scenarios, the Inverse Hessian Attack (IHA) proves more effective against models fine-tuned on the PatchCamelyon dataset in the high-data regime.
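To make the notion of a score-based MIA concrete, the sketch below implements the simplest variant, a loss-threshold attack (in the style of Yeom et al.): a point is flagged as a training-set member when the model's loss on it falls below a threshold, exploiting the tendency of models to fit training points more tightly than unseen ones. The model outputs, labels, and threshold here are illustrative stand-ins, not the paper's actual experimental setup.

```python
# Minimal sketch of a score-based membership inference attack
# (loss-threshold variant). All inputs below are toy placeholders.
import numpy as np


def cross_entropy(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Per-example cross-entropy loss from predicted class probabilities."""
    eps = 1e-12  # guard against log(0)
    return -np.log(probs[np.arange(len(labels)), labels] + eps)


def loss_threshold_attack(probs: np.ndarray, labels: np.ndarray,
                          threshold: float) -> np.ndarray:
    """Predict 'member' (True) when the per-example loss is below the
    threshold; training points typically incur lower loss than non-members."""
    return cross_entropy(probs, labels) < threshold


# Toy example: a confident prediction mimics a training member,
# an uncertain one mimics a non-member.
probs = np.array([[0.95, 0.05],   # low loss  -> flagged as member
                  [0.55, 0.45]])  # high loss -> flagged as non-member
labels = np.array([0, 0])
print(loss_threshold_attack(probs, labels, threshold=0.5))  # [ True False]
```

Stronger attacks such as LiRA refine this idea by calibrating the score per example against shadow models trained with and without the target point, which is why the paper evaluates them separately from plain score thresholding.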
Problem

Research questions and friction points this paper is trying to address.

Evaluating privacy risks in deep transfer learning models
Comparing the effectiveness of diverse membership inference attacks
Identifying optimal attacks for different data regimes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compares diverse membership inference attacks
Evaluates attack efficacy with varying training data
Identifies optimal attacks for specific datasets