🤖 AI Summary
To address sensitive data leakage during model training, this paper proposes a novel secret-protection paradigm grounded in posterior reconstruction probability constraints, sidestepping the stringent utility–privacy trade-off inherent in conventional differential privacy (DP). Methodologically, it formalizes secret protection as an upper bound on the computable posterior probability of secret reconstruction; introduces a weight-guided Poisson sampling mechanism that enables fine-grained, customizable protection intensity at the per-secret level; and establishes a linear programming framework that jointly optimizes example weights and the sampling strategy, explicitly modeling the privacy–utility trade-off. Evaluated on multiple benchmark tasks, the approach achieves up to 12.7% higher accuracy than DP-SGD under equivalent privacy guarantees, demonstrating the feasibility of lightweight, efficient, and practical secret-level protection.
📝 Abstract
We consider the problem of secret protection, in which a business or organization wishes to train a model on its own data while avoiding leakage, via the model, of secrets potentially contained in that data. The standard method for training models that avoid memorizing secret information is differential privacy (DP). However, DP requires either a large loss in utility or a large dataset to achieve its strict privacy definition, which may be unnecessary in our setting, where the data curator and data owner are the same entity. We propose an alternate definition of secret protection that, rather than targeting DP, targets a bound on the posterior probability of secret reconstruction. We then propose and empirically evaluate an algorithm for model training under this secret protection definition. Our algorithm solves a linear program to assign weights to examples based on the desired per-secret protection levels, and then performs Poisson sampling using these weights. We show our algorithm significantly outperforms the baseline of running DP-SGD on the whole dataset.
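The two-step pipeline described above (assign per-example weights via a linear program, then Poisson-subsample with those weights) can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the names (`secret_membership`, `caps`) and the particular LP objective and constraints (maximize total expected sample size subject to a cap on the summed weight of each secret's examples) are assumptions chosen for clarity.

```python
# Hypothetical sketch: (1) a linear program assigns each training example a
# sampling weight in [0, 1], capping the total weight of examples that
# contain each secret (a tighter cap = stronger protection for that secret);
# (2) a Poisson subsample includes each example independently with its weight.
import random
import numpy as np
from scipy.optimize import linprog

def assign_weights(n_examples, secret_membership, caps):
    """Maximize total expected sample size subject to per-secret weight caps.

    secret_membership: one list of example indices per secret.
    caps: per-secret upper bound on the summed weight of its examples.
    """
    c = -np.ones(n_examples)                    # linprog minimizes, so negate
    A_ub = np.zeros((len(secret_membership), n_examples))
    for j, members in enumerate(secret_membership):
        A_ub[j, members] = 1.0                  # row j sums secret j's weights
    res = linprog(c, A_ub=A_ub, b_ub=np.asarray(caps, dtype=float),
                  bounds=[(0.0, 1.0)] * n_examples, method="highs")
    assert res.success
    return res.x

def poisson_sample(weights, rng):
    """Include example i independently with probability weights[i]."""
    return [i for i, w in enumerate(weights) if rng.random() < w]

weights = assign_weights(
    n_examples=6,
    secret_membership=[[0, 1, 2], [3, 4]],      # which examples hold each secret
    caps=[1.5, 0.5],
)
batch = poisson_sample(weights, random.Random(0))
```

Under this toy instance, the example with no secret (index 5) receives full weight, while the examples carrying the more tightly protected secret share a total weight of at most 0.5, so each is rarely sampled and contributes less to training.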