When Better Features Mean Greater Risks: The Performance-Privacy Trade-Off in Contrastive Learning

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work uncovers an intrinsic trade-off between improved model performance and heightened membership-privacy leakage in contrastive learning encoders: the experimental analysis establishes a positive correlation between encoder architectural capability and the intensity of privacy leakage. Addressing key limitations of existing membership inference attacks (MIAs), namely their reliance on labels or gradients and their poor robustness, the authors propose the Embedding Lp-Norm Likelihood Attack (LpLA), a label- and gradient-free MIA that models membership likelihood from the statistical distribution of the p-norms of learned embedding vectors. Extensive experiments across multiple datasets and encoder architectures demonstrate that LpLA outperforms state-of-the-art MIAs, particularly under low query budgets and weak adversary priors. The findings introduce embedding-norm statistics as a new dimension for encoder privacy-risk assessment and provide empirical grounding for the privacy-performance trade-off in self-supervised representation learning.

📝 Abstract
With the rapid advancement of deep learning technology, pre-trained encoder models have demonstrated exceptional feature extraction capabilities, playing a pivotal role in the research and application of deep learning. However, their widespread use has raised significant concerns about the risk of training data privacy leakage. This paper systematically investigates the privacy threats posed by membership inference attacks (MIAs) targeting encoder models, focusing on contrastive learning frameworks. Through experimental analysis, we reveal the significant impact of model architecture complexity on membership privacy leakage: as more advanced encoder frameworks improve feature-extraction performance, they simultaneously exacerbate privacy-leakage risks. Furthermore, this paper proposes a novel membership inference attack based on the p-norm of feature vectors, termed the Embedding Lp-Norm Likelihood Attack (LpLA). This method infers membership status by leveraging the statistical distribution characteristics of the p-norms of feature vectors. Experimental results across multiple datasets and model architectures demonstrate that LpLA outperforms existing methods in attack performance and robustness, particularly under limited attack knowledge and query volumes. This study not only uncovers the potential risks of privacy leakage in contrastive learning frameworks, but also provides a practical basis for privacy protection research in encoder models. We hope that this work will draw greater attention to the privacy risks associated with self-supervised learning models and shed light on the importance of a balance between model utility and training data privacy. Our code is publicly available at: https://github.com/SeroneySun/LpLA_code.
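The abstract describes LpLA as inferring membership from the statistical distribution of the p-norms of feature vectors. The paper's exact estimation procedure is not given on this page, so the following is only a minimal, hypothetical sketch of the general idea: compute each embedding's p-norm, fit a simple distribution (a Gaussian is assumed here purely for illustration) to norms from reference member and non-member embeddings, and score a query by log-likelihood ratio. All function names and parameters are illustrative, not taken from the released code.

```python
import numpy as np

def lp_norms(embeddings: np.ndarray, p: float = 2.0) -> np.ndarray:
    """p-norm of each embedding vector (one row per sample)."""
    return np.linalg.norm(embeddings, ord=p, axis=1)

def gauss_logpdf(x: float, mu: float, sd: float) -> float:
    """Gaussian log-density; stands in for whatever distribution
    the actual attack fits to the embedding norms."""
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu) ** 2 / (2 * sd**2)

def membership_score(query_emb: np.ndarray,
                     member_norms: np.ndarray,
                     nonmember_norms: np.ndarray,
                     p: float = 2.0) -> float:
    """Log-likelihood ratio of the query's p-norm under norm
    distributions fitted on reference member / non-member
    embeddings; a positive score favours 'member'."""
    q = np.linalg.norm(query_emb, ord=p)
    mu_m, sd_m = member_norms.mean(), member_norms.std() + 1e-12
    mu_n, sd_n = nonmember_norms.mean(), nonmember_norms.std() + 1e-12
    return gauss_logpdf(q, mu_m, sd_m) - gauss_logpdf(q, mu_n, sd_n)
```

On synthetic embeddings whose member and non-member norms concentrate around different values, the sign of this score separates the two populations; in the attack setting, the reference norms would come from querying the target encoder under whatever prior knowledge the adversary is assumed to hold.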
Problem

Research questions and friction points this paper is trying to address.

Investigates privacy risks from membership inference attacks in contrastive learning
Reveals trade-off between feature extraction performance and privacy leakage
Proposes LpLA attack method using p-norm of feature vectors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes Embedding Lp-Norm Likelihood Attack (LpLA)
Analyzes privacy risks in contrastive learning frameworks
Balances model utility and training data privacy
👥 Authors
Ruining Sun — School of Mathematics and Computational Science, Xiangtan University, Xiangtan, Hunan, China
Hongsheng Hu — Lecturer, School of Information and Physical Sciences, University of Newcastle (Trustworthy Machine Learning, Machine Unlearning)
Wei Luo — School of Information Technology, Deakin University, Burwood, VIC, Australia
Zhaoxi Zhang — School of Computer Science, University of Technology Sydney, Ultimo, NSW, Australia
Yanjun Zhang — Lecturer, University of Technology Sydney (Security and Privacy, Machine Learning)
Haizhuan Yuan — School of Mathematics and Computational Science, Xiangtan University, Xiangtan, Hunan, China
Leo Yu Zhang — School of Information and Communication Technology, Griffith University, Southport, QLD, Australia