On the Impact of Uncertainty and Calibration on Likelihood-Ratio Membership Inference Attacks

📅 2024-02-16
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work systematically analyzes, within an information-theoretic framework, how aleatoric and epistemic uncertainty—as well as model miscalibration—affect the efficacy of likelihood-ratio-based membership inference attacks (LiRA). It disentangles the independent impacts of the two uncertainty types and of calibration error, deriving tight, interpretable theoretical bounds on attack advantage under three disclosure modes: confidence vectors (CV), true-label confidence (TLC), and decision sets (DS). The analysis reveals that poor calibration substantially amplifies privacy risk. Empirical validation—using likelihood-ratio testing, conformal prediction, and uncertainty quantification—confirms that the bounds accurately predict actual attack performance. The study thus provides a quantification tool for LiRA privacy risk that is both theoretically rigorous and practically interpretable.

📝 Abstract
In a membership inference attack (MIA), an attacker exploits the overconfidence exhibited by typical machine learning models to determine whether a specific data point was used to train a target model. In this paper, we analyze the performance of the likelihood ratio attack (LiRA) within an information-theoretical framework that allows the investigation of the impact of the aleatoric uncertainty in the true data generation process, of the epistemic uncertainty caused by a limited training data set, and of the calibration level of the target model. We compare three different settings, in which the attacker receives decreasingly informative feedback from the target model: confidence vector (CV) disclosure, in which the output probability vector is released; true label confidence (TLC) disclosure, in which only the probability assigned to the true label is made available by the model; and decision set (DS) disclosure, in which an adaptive prediction set is produced as in conformal prediction. We derive bounds on the advantage of an MIA adversary with the aim of offering insights into the impact of uncertainty and calibration on the effectiveness of MIAs. Simulation results demonstrate that the derived analytical bounds predict well the effectiveness of MIAs.
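The core of LiRA is a likelihood-ratio test: the attacker compares how likely the target model's confidence on a data point is under an "IN" (member) distribution versus an "OUT" (non-member) distribution, each typically estimated from shadow models. The paper itself works analytically; the following is only an illustrative sketch of the standard Gaussian-approximation scoring step, with all function names and inputs assumed for the example.

```python
import math

def lira_score(target_conf, in_confs, out_confs):
    """Likelihood-ratio membership score.

    target_conf: the target model's confidence on the queried point.
    in_confs:  confidences from shadow models trained WITH the point.
    out_confs: confidences from shadow models trained WITHOUT it.
    Returns the log-likelihood ratio under Gaussian fits; a positive
    score means 'member' is more likely than 'non-member'.
    """
    def mean_std(xs):
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        return mu, max(math.sqrt(var), 1e-8)  # floor avoids division by zero

    def gauss_logpdf(x, mu, sigma):
        return -0.5 * math.log(2 * math.pi * sigma ** 2) \
               - (x - mu) ** 2 / (2 * sigma ** 2)

    mu_in, s_in = mean_std(in_confs)
    mu_out, s_out = mean_std(out_confs)
    return gauss_logpdf(target_conf, mu_in, s_in) \
         - gauss_logpdf(target_conf, mu_out, s_out)
```

In the TLC setting the attacker would apply this score to the true-label confidence only; in the CV setting the full probability vector is available, so richer statistics can be tested.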
Problem

Research questions and friction points this paper is trying to address.

Analyzes impact of uncertainty and calibration on membership inference attacks.
Compares effectiveness of attacks under different model feedback settings.
Derives bounds to predict effectiveness of likelihood ratio attacks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes likelihood ratio attack using information theory
Compares three model feedback disclosure settings
Derives bounds on membership inference attack effectiveness
Meiyi Zhu
Beijing Key Laboratory of Network System Architecture and Convergence, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Caili Guo
Beijing University of Posts and Telecommunications
wireless communication, cognitive radio, statistical signal processing, social multimedia computing, big data processing, vehic
Chunyan Feng
Beijing Key Laboratory of Network System Architecture and Convergence, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
Osvaldo Simeone
King's College London
information theory, machine learning, quantum information processing, wireless systems