Noise-Aware Differentially Private Regression via Meta-Learning

📅 2024-06-12
🏛️ Neural Information Processing Systems
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the low accuracy, poor calibration, and hyperparameter sensitivity of differentially private (DP) regression models in high-stakes settings, this paper proposes DPConvCNP, a framework that combines meta-learning with an improved functional differential privacy mechanism (building on Hall et al., 2013) to map private data to a DP prediction model in a single forward pass. Using Convolutional Conditional Neural Processes (ConvCNPs) as its backbone, DPConvCNP can model non-Gaussian data. Under strict ε-DP guarantees, it substantially outperforms carefully tuned DP Gaussian process baselines in both predictive accuracy and calibration, achieves a 10× speedup at inference, and is markedly less sensitive to hyperparameters.

📝 Abstract
Many high-stakes applications require machine learning models that protect user privacy and provide well-calibrated, accurate predictions. While Differential Privacy (DP) is the gold standard for protecting user privacy, standard DP mechanisms typically significantly impair performance. One approach to mitigating this issue is pre-training models on simulated data before DP learning on the private data. In this work we go a step further, using simulated data to train a meta-learning model that combines the Convolutional Conditional Neural Process (ConvCNP) with an improved functional DP mechanism of Hall et al. [2013] yielding the DPConvCNP. DPConvCNP learns from simulated data how to map private data to a DP predictive model in one forward pass, and then provides accurate, well-calibrated predictions. We compare DPConvCNP with a DP Gaussian Process (GP) baseline with carefully tuned hyperparameters. The DPConvCNP outperforms the GP baseline, especially on non-Gaussian data, yet is much faster at test time and requires less tuning.
Problem

Research questions and friction points this paper is trying to address.

Protecting user privacy in machine learning models
Improving accuracy and calibration in DP predictions
Reducing performance impairment from standard DP mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Meta-learning with simulated data for privacy
Combining ConvCNP with functional DP mechanism
One-forward-pass DP predictive model training
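The core idea behind the "one-forward-pass" mapping is that privacy is enforced on a functional representation of the context set before any network sees it. The following is a minimal illustrative sketch, not the paper's exact mechanism: it builds a ConvCNP-style gridded density/value representation of a private context set, bounds each point's contribution by clipping, and releases the representation under (ε, δ)-DP via the Gaussian mechanism. All function and parameter names here are hypothetical.

```python
import numpy as np

def dp_functional_release(x_context, y_context, grid, lengthscale=0.2,
                          clip=1.0, epsilon=1.0, delta=1e-3, rng=None):
    """Illustrative sketch: DP release of a ConvCNP-style gridded
    representation of a private 1D context set.

    Each context point contributes an RBF bump to a density channel and a
    value channel; clipping each point's contribution bounds the per-point
    L2 sensitivity, and calibrated Gaussian noise makes the two released
    functions (epsilon, delta)-DP. A decoder network (not shown) would then
    map the noisy channels to a predictive distribution in one forward pass.
    """
    rng = np.random.default_rng() if rng is None else rng
    density = np.zeros_like(grid)
    signal = np.zeros_like(grid)
    for x, y in zip(x_context, y_context):
        bump = np.exp(-0.5 * ((grid - x) / lengthscale) ** 2)
        # Clip so one point's contribution to the density channel
        # has L2 norm at most `clip`.
        norm = np.linalg.norm(bump)
        bump = bump * min(1.0, clip / max(norm, 1e-12))
        density += bump
        # Clip the target value, so the value-channel contribution
        # also has L2 norm at most `clip`.
        signal += np.clip(y, -clip, clip) * bump / clip
    # Removing one point changes each channel by at most `clip` in L2,
    # so the joint sensitivity of (density, signal) is sqrt(2) * clip.
    sensitivity = np.sqrt(2.0) * clip
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    density = density + rng.normal(0.0, sigma, size=grid.shape)
    signal = signal + rng.normal(0.0, sigma, size=grid.shape)
    return density, signal
```

Because the noise is added once to the released functions, everything downstream (the decoder's forward pass and any meta-learned post-processing) is private by post-processing, which is what makes the single-forward-pass prediction possible.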