🤖 AI Summary
To address the risk of inferring sensitive attributes during feature engineering on privacy-sensitive data, this paper formalizes the Privacy-Preserving Data Reprogramming (PPDR) task: jointly maximizing target-attribute prediction accuracy while minimizing the predictability of sensitive attributes. We propose a two-stage variational disentanglement framework. In Stage I, policy-guided reinforcement learning searches for utility-oriented feature transformations. In Stage II, a variational LSTM seq2seq model constructs a latent space that separates utility from privacy, augmented by adversarial-causal disentanglement regularization to suppress sensitive information. Extensive experiments across eight benchmark datasets demonstrate that our method achieves an average 9.3% improvement in target prediction accuracy and a 35% reduction in sensitive-attribute predictability over state-of-the-art approaches, effectively balancing utility and privacy protection.
📝 Abstract
In real-world applications, domain data often contains identifiable or sensitive attributes, is subject to strict regulations (e.g., HIPAA, GDPR), and requires explicit feature engineering for interpretability and transparency. Existing feature engineering primarily focuses on advancing downstream task performance, often risking privacy leakage. We generalize feature engineering under these new requirements as Privacy-Preserving Data Reprogramming (PPDR): given a dataset, transform its features to maximize target-attribute prediction accuracy while minimizing sensitive-attribute prediction accuracy. PPDR poses two challenges for existing systems: 1) generating high-utility feature transformations without being overwhelmed by a large search space, and 2) disentangling and eliminating sensitive information from utility-oriented features to reduce privacy inferability. To tackle these challenges, we propose DELTA, a two-phase variational disentangled generative learning framework. Phase I uses policy-guided reinforcement learning to discover feature transformations with downstream task utility, without regard to privacy inferability. Phase II employs a variational LSTM seq2seq encoder-decoder with a utility-privacy disentangled latent space and adversarial-causal disentanglement regularization to suppress privacy signals during feature generation. Experiments on eight datasets show DELTA improves predictive performance by ~9.3% and reduces privacy leakage by ~35%, demonstrating robust, privacy-aware data transformation.
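To make the PPDR trade-off concrete, here is a minimal sketch of how a transformed feature set might be scored under the task's two competing goals. This is a hypothetical scoring function for illustration only (the function name `ppdr_score`, the weight `lam`, and the chance-level baseline are all assumptions, not the paper's formulation): utility is rewarded, and any attacker accuracy on the sensitive attribute above random guessing is penalized.

```python
# Hypothetical PPDR scoring sketch -- NOT the paper's actual objective.
# Rewards target-task accuracy; penalizes sensitive-attribute predictability
# above chance level, weighted by lam.

def ppdr_score(target_acc: float, sensitive_acc: float,
               n_classes: int = 2, lam: float = 1.0) -> float:
    """Higher is better: high utility, low privacy leakage."""
    chance = 1.0 / n_classes            # accuracy of a random-guessing attacker
    leakage = max(0.0, sensitive_acc - chance)  # only above-chance leakage counts
    return target_acc - lam * leakage

# Leaky features: decent utility, but the sensitive attribute is inferable.
baseline = ppdr_score(target_acc=0.80, sensitive_acc=0.75)
# Disentangled features: higher utility, attacker near chance (as DELTA aims for).
disentangled = ppdr_score(target_acc=0.87, sensitive_acc=0.52)
assert disentangled > baseline
```

Under this toy metric, a transformation that pushes sensitive-attribute predictability toward chance while preserving (or improving) target accuracy dominates one that leaks, which is exactly the balance the two phases of DELTA are designed to strike.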