PrivCode: When Code Generation Meets Differential Privacy

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the risk of sensitive data leakage in code generation and the difficulty of balancing syntactic constraints against the privacy–utility trade-off, this paper proposes PrivCode, the first differential privacy (DP) framework for synthesizing code datasets. PrivCode adopts a two-stage design: (1) a DP-SGD-trained model infused with syntactic priors sanitizes sensitive information, and (2) a larger language model refines the synthetic code, guided by syntactic correctness and functional fidelity. Evaluated on four mainstream LLMs across varying privacy budgets (ε = 1–8), PrivCode consistently improves the syntactic correctness, functional accuracy, and executability of generated code, and empirical analysis confirms effective suppression of training-data memorization. PrivCode thus achieves a principled balance between strong formal privacy guarantees and high generation quality.

📝 Abstract
Large language models (LLMs) have demonstrated outstanding performance in code generation and completion. However, fine-tuning these models on private datasets can raise privacy and proprietary concerns, such as the leakage of sensitive personal information. Differentially private (DP) code generation provides theoretical guarantees for protecting sensitive code by generating synthetic datasets that preserve statistical properties while reducing privacy leakage concerns. However, DP code generation faces significant challenges due to strict syntactic dependencies and the privacy–utility trade-off. We propose PrivCode, the first DP synthesizer specifically designed for code datasets. It incorporates a two-stage framework to improve both privacy and utility. In the first stage, termed "privacy-sanitizing", PrivCode generates DP-compliant synthetic code by training models with DP-SGD while introducing syntactic information to preserve code structure. The second stage, termed "utility-boosting", fine-tunes a larger pre-trained LLM on the synthetic privacy-free code to mitigate the utility loss caused by DP, enhancing the utility of the generated code. Extensive experiments on four LLMs show that PrivCode generates higher-utility code across various testing tasks under four benchmarks. The experiments also confirm its ability to protect sensitive data under varying privacy budgets. We provide the replication package at the anonymous link.
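The abstract's "privacy-sanitizing" stage rests on DP-SGD, whose core mechanism is per-example gradient clipping followed by calibrated Gaussian noise. PrivCode's actual training code is not shown on this page; the sketch below (with a hypothetical `dp_sgd_step` helper, gradients as plain Python lists) only illustrates that underlying mechanism, not the paper's implementation:

```python
import math
import random

def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1, rng=None):
    """One DP-SGD update (illustrative sketch):
    1. clip each example's gradient to L2 norm <= clip_norm,
    2. sum the clipped gradients and add Gaussian noise with
       std = noise_multiplier * clip_norm,
    3. average and take a gradient-descent step."""
    rng = rng or random.Random(0)
    n, dim = len(per_example_grads), len(params)
    summed = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / (norm + 1e-12))  # per-example clipping
        for i in range(dim):
            summed[i] += g[i] * scale
    sigma = noise_multiplier * clip_norm  # noise calibrated to the clip bound
    noisy_avg = [(summed[i] + rng.gauss(0.0, sigma)) / n for i in range(dim)]
    return [params[i] - lr * noisy_avg[i] for i in range(dim)]
```

Because each example's contribution is bounded by `clip_norm`, the added noise yields a formal (ε, δ) guarantee under composition across training steps; the privacy budgets ε = 1–8 reported in the paper correspond to different noise-multiplier settings.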
Problem

Research questions and friction points this paper is trying to address.

Develops a differentially private code generation framework
Addresses privacy-utility trade-off in synthetic code datasets
Protects sensitive data while preserving code structure and utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage DP framework for code synthesis
DP-SGD training with syntactic structure preservation
Fine-tuning LLMs on synthetic privacy-free code
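The second stage refines synthetic code guided by syntactic correctness. As a toy stand-in for that check (the paper's own validator is not shown here; `filter_syntactic` is a hypothetical helper, and Python parsing via `ast` is an assumption), one could keep only synthetic samples that parse:

```python
import ast

def filter_syntactic(snippets):
    """Keep only snippets that parse as valid Python; invalid synthetic
    samples are dropped before they are used for fine-tuning."""
    valid = []
    for code in snippets:
        try:
            ast.parse(code)
            valid.append(code)
        except SyntaxError:
            pass
    return valid
```

A real pipeline would pair such a syntax gate with functional checks (e.g., running unit tests), matching the paper's combined criteria of syntactic correctness and functional fidelity.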