Privacy-Preserving Model Transcription With Differentially Private Synthetic Distillation

📅 2026-01-27
🏛️ IEEE Transactions on Pattern Analysis and Machine Intelligence
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the privacy risks in deep learning model deployment arising from potential leakage of training data or labels by proposing a data-free differentially private model distillation method. It introduces, for the first time, a synergistic adversarial optimization framework that integrates differential privacy with data-free knowledge distillation through a teacher–student–generator triad. Within this framework, a trainable generator synthesizes private surrogate data to facilitate distillation, enabling the student model to closely approximate the teacher’s performance under rigorous privacy guarantees. Theoretical analysis establishes the convergence of the proposed algorithm, and empirical evaluations demonstrate that the resulting student models achieve high accuracy while providing strong privacy protection. Moreover, the synthesized data effectively support downstream tasks, with the overall approach outperforming 26 existing methods.

📝 Abstract
While many deep learning models trained on private datasets have been deployed in various practical tasks, they may pose a privacy leakage risk, as attackers could recover informative data or label knowledge from the models. In this work, we present privacy-preserving model transcription, a data-free model-to-model conversion solution that facilitates model deployment with a privacy guarantee. To this end, we propose a cooperative-competitive learning approach, termed differentially private synthetic distillation, that learns to convert a pretrained model (teacher) into its privacy-preserving counterpart (student) via a trainable generator, without access to the private data. The learning coordinates three players in a unified framework and performs alternate optimization: i) the generator learns to produce synthetic data; ii) the teacher and student take the synthetic data and compute differentially private labels via flexible noisy perturbation of the data or labels; and iii) the student is updated with the noisy labels, while the generator is updated by taking the student as a discriminator for adversarial training. We theoretically prove that our approach guarantees differential privacy and convergence. The transcribed student achieves good performance and privacy protection, while the resulting generator can produce private synthetic data for downstream tasks. Extensive experiments demonstrate that our approach outperforms 26 state-of-the-art methods.
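The three-step alternate optimization in the abstract can be illustrated with a toy linear instantiation. This is a minimal sketch, not the paper's method: the model sizes, noise scale, and learning rate below are illustrative assumptions, and simple Gaussian noise on the teacher's labels stands in for the paper's differentially private perturbation mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins (illustrative sizes, not the paper's models):
# a frozen pretrained "teacher", a trainable "student", and a
# trainable linear "generator" mapping latent noise to surrogate data.
D_Z, D_X, D_Y = 8, 6, 3
teacher = rng.normal(size=(D_X, D_Y))   # pretrained, frozen
student = np.zeros((D_X, D_Y))          # to be distilled
gen_w = rng.normal(size=(D_Z, D_X))     # generator weights
gen_norm = np.linalg.norm(gen_w)        # fixed scale for stability

sigma, lr = 0.1, 0.02                   # label-noise scale, step size

for _ in range(400):
    # i) the generator synthesizes surrogate data from latent noise
    z = rng.normal(size=(32, D_Z))
    x = z @ gen_w

    # ii) the teacher labels the synthetic batch; Gaussian label noise
    #     stands in for the DP perturbation mechanism
    y_noisy = x @ teacher + sigma * rng.normal(size=(32, D_Y))

    # iii-a) the student is updated to fit the noisy teacher labels
    grad_s = x.T @ (x @ student - y_noisy) / len(x)
    student -= lr * grad_s

    # iii-b) the generator ascends the teacher-student discrepancy,
    #        i.e. the student acts as an adversarial discriminator
    gap = x @ (student - teacher)
    grad_g = z.T @ (gap @ (student - teacher).T) / len(x)
    gen_w += lr * grad_g
    gen_w *= gen_norm / np.linalg.norm(gen_w)  # keep generator scale fixed

# Relative distance of the distilled student from the frozen teacher
err = np.linalg.norm(student - teacher) / np.linalg.norm(teacher)
```

The adversarial step is self-limiting: as the student closes the gap in one direction, the generator's gradient there vanishes and it rotates toward directions where the teacher-student discrepancy is still large, which encourages coverage of the teacher's behavior without ever touching private data.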
Problem

Research questions and friction points this paper is trying to address.

privacy-preserving model transcription
differential privacy
model distillation
synthetic data
private data protection
Innovation

Methods, ideas, or system contributions that make the work stand out.

privacy-preserving model transcription
differentially private synthetic distillation
data-free knowledge distillation
adversarial generator training
differential privacy