Do You Really Need Public Data? Surrogate Public Data for Differential Privacy on Tabular Data

📅 2025-04-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Differentially private (DP) tabular learning suffers from a scarcity of trustworthy public data for pretraining, hyperparameter tuning, and privacy–utility trade-off estimation. Method: The paper introduces the "surrogate public data" paradigm: generating high-fidelity synthetic tabular data without accessing any sensitive records, using only publicly available metadata (e.g., field names, data types, value ranges). It formally defines this notion and proposes two large language model (LLM)–driven generation strategies: (i) direct synthesis of records as CSV files and (ii) automated construction of a structural causal model (SCM) that is then sampled. Contribution/Results: Experiments show that surrogate public data can effectively replace real public data for pretraining DP tabular classifiers while consuming no privacy budget, and, to a lesser extent, are useful for hyperparameter tuning of DP synthetic data generators and for estimating the privacy–utility trade-off.

📝 Abstract
Differentially private (DP) machine learning often relies on the availability of public data for tasks like privacy-utility trade-off estimation, hyperparameter tuning, and pretraining. While public data assumptions may be reasonable in text and image domains, they are less likely to hold for tabular data due to tabular data heterogeneity across domains. We propose leveraging powerful priors to address this limitation; specifically, we synthesize realistic tabular data directly from schema-level specifications - such as variable names, types, and permissible ranges - without ever accessing sensitive records. To that end, this work introduces the notion of "surrogate" public data - datasets generated independently of sensitive data, which consume no privacy loss budget and are constructed solely from publicly available schema or metadata. Surrogate public data are intended to encode plausible statistical assumptions (informed by publicly available information) into a dataset with many downstream uses in private mechanisms. We automate the process of generating surrogate public data with large language models (LLMs); in particular, we propose two methods: direct record generation as CSV files, and automated structural causal model (SCM) construction for sampling records. Through extensive experiments, we demonstrate that surrogate public tabular data can effectively replace traditional public data when pretraining differentially private tabular classifiers. To a lesser extent, surrogate public data are also useful for hyperparameter tuning of DP synthetic data generators, and for estimating the privacy-utility trade-off.
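The SCM-based strategy described in the abstract can be sketched as follows. This is a hand-written illustrative model, not the paper's actual LLM-constructed SCM: the schema (field names, types, ranges) and the causal graph age → education_years → income are assumptions made up for this sketch. The key property it illustrates is that only schema-level metadata enters the generator, so sampling touches no sensitive records and spends no privacy budget.

```python
import random

# Hypothetical schema-level metadata (names, types, permissible ranges) --
# the only public information the surrogate-data paradigm assumes exists.
SCHEMA = {
    "age": ("int", 18, 90),
    "education_years": ("int", 0, 20),
    "income": ("float", 0.0, 200_000.0),
}

def sample_record(rng: random.Random) -> dict:
    """Sample one record from a toy structural causal model.

    Each variable is a function of its parents plus independent noise;
    the graph age -> education_years -> income is an illustrative
    assumption, not the paper's model.
    """
    age = rng.randint(18, 90)
    # Education depends weakly on age, clipped to the schema range.
    education = min(20, max(0, int(rng.gauss(min(age - 16, 14), 2))))
    # Income depends on both parents, clipped to the schema range.
    income = 15_000 + 2_500 * education + 300 * (age - 18) + rng.gauss(0, 10_000)
    income = max(0.0, min(200_000.0, income))
    return {"age": age, "education_years": education, "income": round(income, 2)}

def generate_surrogate_data(n: int, seed: int = 0) -> list[dict]:
    """Draw n surrogate records; a seed makes the dataset reproducible."""
    rng = random.Random(seed)
    return [sample_record(rng) for _ in range(n)]
```

The resulting records stay inside the schema's declared ranges by construction, which is what makes them usable downstream (e.g., for pretraining a DP classifier) without ever consulting the sensitive table.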
Problem

Research questions and friction points this paper is trying to address.

Addressing lack of public tabular data for DP machine learning
Generating surrogate public data from schema-level specifications
Using surrogate data for privacy-utility tradeoff and pretraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates surrogate public tabular data from schema
Uses LLMs for automated data generation
Enables privacy-preserving pretraining without sensitive data
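The other generation strategy, direct record synthesis as CSV, can be sketched as prompt construction plus response parsing. The schema below and the prompt wording are hypothetical, and no LLM API is called here; the sketch only shows how schema-level metadata alone would drive the request and how a CSV reply would be turned into records.

```python
import csv
import io

# Hypothetical schema metadata: (column name, type, permissible range).
SCHEMA = [
    ("age", "int", "18-90"),
    ("occupation", "str", "free text"),
    ("income", "float", ">= 0"),
]

def build_prompt(schema, n_rows: int = 50) -> str:
    """Compose an LLM prompt asking for direct CSV record synthesis.

    Only schema-level information (names, types, ranges) enters the
    prompt, so no sensitive records are exposed to the model.
    """
    cols = "\n".join(f"- {name} ({typ}, range: {rng})" for name, typ, rng in schema)
    return (
        f"Generate {n_rows} realistic rows of CSV data (with a header row) "
        f"for a table with the following columns:\n{cols}\n"
        "Return only the CSV, no commentary."
    )

def parse_csv_response(text: str) -> list[dict]:
    """Parse the model's CSV reply into a list of record dicts."""
    return list(csv.DictReader(io.StringIO(text.strip())))
```

In practice the reply would come from an LLM call; here a fixed string stands in for it:

```python
reply = "age,occupation,income\n34,teacher,52000\n51,engineer,91000\n"
rows = parse_csv_response(reply)  # [{'age': '34', ...}, {'age': '51', ...}]
```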