🤖 AI Summary
This work establishes the first provable theory of privacy vulnerabilities for trained two-layer ReLU neural networks under the implicit-bias-driven framework, systematically analyzing data reconstruction and membership inference attacks. Method: combining implicit-bias theory, geometric analysis of ReLU networks, and probabilistic arguments, our approach reconstructs a set of which a constant fraction are true training samples in the univariate setting, and determines membership correctly with high probability in the high-dimensional setting. Contribution/Results: moving beyond prior empirical analyses, we provide the first rigorous theoretical characterization, with provable guarantees, of the privacy risks inherent in shallow-network training. Crucially, we demonstrate that implicit regularization itself constitutes a privacy-leakage pathway, revealing a fundamental privacy-generalization trade-off. Our framework delivers new theoretical insights and analytical tools for the foundations of deep-learning privacy.
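The summary invokes implicit-bias theory without stating the underlying result. As a hedged aid to the reader, the characterization typically used in this line of work is the margin-maximization result for homogeneous networks (Lyu and Li, 2020); the notation below (network $N$, labels $y_i$, multipliers $\lambda_i$) is ours, and we assume this is the relevant result rather than asserting it is the one the paper uses.

```latex
% Assumed implicit-bias result (Lyu & Li, 2020), sketched, not the paper's statement:
% gradient flow on a homogeneous network N(\theta; x) with logistic loss on
% separable data converges in direction to a KKT point of the margin problem
\begin{equation}
  \min_{\theta} \ \tfrac{1}{2}\|\theta\|^2
  \quad \text{s.t.} \quad y_i \, N(\theta; x_i) \ge 1 \ \ \forall i .
\end{equation}
% Stationarity at a KKT point expresses the parameters in terms of the
% training data, which is the leverage such privacy attacks exploit:
\begin{equation}
  \theta \;=\; \sum_{i=1}^{n} \lambda_i \, y_i \, \nabla_{\theta} N(\theta; x_i),
  \qquad \lambda_i \ge 0 .
\end{equation}
```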
📝 Abstract
We study which privacy attacks can be provably mounted on trained two-layer ReLU neural networks. We explore two types of attacks: data reconstruction attacks and membership inference attacks. We prove that theoretical results on the implicit bias of two-layer neural networks can be used to provably reconstruct a set of which at least a constant fraction are training points in the univariate setting, and to identify with high probability whether a given point was used in the training set in the high-dimensional setting. To the best of our knowledge, our work is the first to show provable vulnerabilities in this implicit-bias-driven setting.
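To make the univariate reconstruction claim concrete, here is a minimal sketch, not the paper's algorithm: a two-layer ReLU network $f(x) = \sum_j v_j\,\mathrm{relu}(w_j x + b_j) + c$ is piecewise linear with a kink at $x = -b_j / w_j$ for each active neuron, and implicit-bias results suggest many kinks land on or near training inputs. The function name and tolerance below are our own illustrative choices.

```python
import numpy as np

def candidate_training_points(w, b, v, tol=1e-8):
    """Return the breakpoints of a univariate 2-layer ReLU network
    as candidate training inputs (illustrative sketch).

    w, b, v: 1-D arrays of first-layer weights, biases, and output weights.
    Neurons with (near-)zero input weight or output weight are skipped,
    since they contribute no visible kink to the learned function.
    """
    w, b, v = map(np.asarray, (w, b, v))
    active = (np.abs(w) > tol) & (np.abs(v) > tol)
    # Each active neuron relu(w_j * x + b_j) kinks exactly at x = -b_j / w_j.
    return np.unique(-b[active] / w[active])

# Toy usage: a 3-neuron network with kinks at x = 0.5, -1.0, and 2.0.
w = np.array([2.0, 1.0, -1.0])
b = np.array([-1.0, 1.0, 2.0])
v = np.array([1.0, -0.5, 0.3])
print(candidate_training_points(w, b, v))  # [-1.   0.5  2. ]
```

Under the paper's guarantee, at least a constant fraction of such a candidate set would consist of true training points; the sketch only shows where the candidates come from, not the accompanying analysis.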