What is the Cost of Differential Privacy for Deep Learning-Based Trajectory Generation?

📅 2025-06-11
🤖 AI Summary
This study systematically investigates the impact of differential privacy (DP) on the utility of deep trajectory generation models, including diffusion models, VAEs, and GANs, and characterizes the privacy-utility trade-off. To address the incompatibility of DP-SGD with conditional generation, the authors propose the first DP mechanism supporting conditional trajectory synthesis. Empirical analysis reveals a reversal in model ranking under DP constraints: GANs surpass diffusion models in both stability and generation quality when trained with DP-SGD, challenging the implicit assumption that non-private optimality implies private optimality. The authors introduce a multi-dimensional trajectory utility evaluation framework and demonstrate that, although DP-SGD significantly degrades utility, it remains practically viable in large-data regimes. Core contributions: (1) the first conditional DP generation mechanism; (2) empirical evidence that DP adaptability critically influences architecture selection; and (3) a reproducible, cross-architecture benchmark for DP-enabled trajectory generation.
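For readers unfamiliar with DP-SGD: it clips each example's gradient to a fixed norm and adds calibrated Gaussian noise before every optimizer step, which is the source of the utility loss studied here. Below is a minimal, illustrative training sketch using the Opacus library; the toy model, data, and hyperparameter values (`noise_multiplier`, `max_grad_norm`) are placeholders, not the paper's configuration.

```python
# Minimal DP-SGD training sketch with Opacus (illustrative only;
# not the paper's setup). Toy model and random data as placeholders.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
optimizer = optim.SGD(model.parameters(), lr=0.05)
data = TensorDataset(torch.randn(1024, 64), torch.randn(1024, 64))
loader = DataLoader(data, batch_size=64)

privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,   # Gaussian noise scale (placeholder value)
    max_grad_norm=1.0,      # per-sample gradient clipping bound
)

criterion = nn.MSELoss()
for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()     # Opacus records per-sample gradients here
        optimizer.step()    # clipped, noised aggregate update

# Privacy spent so far for a chosen delta
epsilon = privacy_engine.get_epsilon(delta=1e-5)
print(f"(eps = {epsilon:.2f}, delta = 1e-5)-DP")
```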

📝 Abstract
While location trajectories offer valuable insights, they also reveal sensitive personal information. Differential Privacy (DP) offers formal protection, but achieving a favourable utility-privacy trade-off remains challenging. Recent works explore deep learning-based generative models to produce synthetic trajectories. However, current models lack formal privacy guarantees and rely on conditional information derived from real data during generation. This work investigates the utility cost of enforcing DP in such models, addressing three research questions across two datasets and eleven utility metrics. (1) We evaluate how DP-SGD, the standard DP training method for deep learning, affects the utility of state-of-the-art generative models. (2) Since DP-SGD is limited to unconditional models, we propose a novel DP mechanism for conditional generation that provides formal guarantees and assess its impact on utility. (3) We analyse how model types (Diffusion, VAE, and GAN) affect the utility-privacy trade-off. Our results show that DP-SGD significantly impacts performance, although some utility remains if the dataset is sufficiently large. The proposed DP mechanism improves training stability, particularly when combined with DP-SGD, for unstable models such as GANs and on smaller datasets. Diffusion models yield the best utility without guarantees, but with DP-SGD, GANs perform best, indicating that the best non-private model is not necessarily optimal when targeting formal guarantees. In conclusion, DP trajectory generation remains a challenging task, and formal guarantees are currently only feasible with large datasets and in constrained use cases.
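This page does not spell out the proposed conditional mechanism, so the following is only a generic illustration of the underlying idea of DP-protected conditioning: release a differentially private summary of the conditioning attribute (here, an assumed example of a Laplace-noised histogram of trajectory lengths) and sample generation conditions from it rather than reading them from real records. This sketches the general pattern, not the authors' mechanism.

```python
# Generic illustration of DP-protected conditioning (NOT the paper's
# mechanism): publish a Laplace-noised histogram of trajectory lengths,
# then sample generation conditions from the noisy distribution.
import numpy as np

rng = np.random.default_rng(0)

def dp_length_histogram(lengths, bins, epsilon):
    """Laplace mechanism on a count histogram (L1 sensitivity = 1,
    assuming each user contributes one trajectory length)."""
    counts, edges = np.histogram(lengths, bins=bins)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0, None)   # post-processing preserves DP
    return noisy / noisy.sum(), edges

def sample_conditions(probs, edges, n):
    """Draw synthetic conditioning values without touching real data."""
    idx = rng.choice(len(probs), size=n, p=probs)
    return rng.uniform(edges[idx], edges[idx + 1])

lengths = rng.integers(10, 200, size=5000)   # placeholder real lengths
probs, edges = dp_length_histogram(lengths, bins=20, epsilon=0.5)
print(np.round(sample_conditions(probs, edges, n=8), 1))
```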
Problem

Research questions and friction points this paper is trying to address.

Evaluating the utility cost of enforcing Differential Privacy in deep learning-based trajectory generation
Proposing a novel DP mechanism for conditional generation with formal guarantees
Analyzing the impact of model type on the utility-privacy trade-off under DP
Innovation

Methods, ideas, or system contributions that make the work stand out.

Application of DP-SGD to state-of-the-art trajectory generation models
Novel DP mechanism for conditional generation with formal guarantees
Cross-architecture comparison of Diffusion, VAE, and GAN models (a sample utility metric is sketched below)
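For a concrete sense of trajectory utility evaluation: the paper's eleven metrics are not enumerated on this page, but a commonly used point-set measure is the symmetric Hausdorff distance between a real and a synthetic trajectory, sketched below with placeholder data.

```python
# One common trajectory utility measure (illustrative; the paper's
# eleven metrics are not listed here): symmetric Hausdorff distance
# between a real trajectory and a synthetic one.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(1)
real = np.cumsum(rng.normal(size=(100, 2)), axis=0)        # fake GPS track
synthetic = real + rng.normal(scale=0.5, size=real.shape)  # perturbed copy

d = max(directed_hausdorff(real, synthetic)[0],
        directed_hausdorff(synthetic, real)[0])
print(f"Hausdorff distance: {d:.3f}")
```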
👥 Authors
Erik Buchholz
University of New South Wales, CSIRO’s Data61, Cyber Security CRC
Natasha Fernandes
Macquarie University
Differential Privacy, Formal Methods, Quantitative Information Flow
David D. Nguyen
UNSW Sydney, Data61
Deep Learning
Alsharif Abuadbba
CSIRO’s Data61, Cyber Security CRC
Surya Nepal
CSIRO’s Data61, Australia
cyber security, data privacy, distributed systems
Salil S. Kanhere
University of New South Wales