Privacy Amplification by Structured Subsampling for Deep Differentially Private Time Series Forecasting

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a gap in the privacy guarantees of differentially private stochastic gradient descent (DP-SGD) when applied to time-series forecasting: DP-SGD's standard amplification analysis assumes small, unstructured batches, whereas forecasting batches are built by sampling structured series, extracting contiguous subsequences, and splitting them into context and forecast windows. The authors provide the first theoretical analysis of the privacy amplification afforded by this structured subsampling, yielding sound and tight event-level and user-level privacy guarantees for forecasting models. They additionally prove that data augmentation amplifies privacy in self-supervised training of sequence models. Empirical evaluation demonstrates that amplification by structured subsampling enables the training of forecasting models with strong formal privacy guarantees, improving the privacy–utility trade-off.

📝 Abstract
Many forms of sensitive data, such as web traffic, mobility data, or hospital occupancy, are inherently sequential. The standard method for training machine learning models while ensuring privacy for units of sensitive information, such as individual hospital visits, is differentially private stochastic gradient descent (DP-SGD). However, we observe in this work that the formal guarantees of DP-SGD are incompatible with time-series-specific tasks like forecasting, since they rely on the privacy amplification attained by training on small, unstructured batches sampled from an unstructured dataset. In contrast, batches for forecasting are generated by (1) sampling sequentially structured time series from a dataset, (2) sampling contiguous subsequences from these series, and (3) partitioning them into context and ground-truth forecast windows. We theoretically analyze the privacy amplification attained by this structured subsampling to enable the training of forecasting models with sound and tight event- and user-level privacy guarantees. Towards more private models, we additionally prove how data augmentation amplifies privacy in self-supervised training of sequence models. Our empirical evaluation demonstrates that amplification by structured subsampling enables the training of forecasting models with strong formal privacy guarantees.
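The three-step batch construction described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the Poisson-style per-series sampling rate, and all parameter names are assumptions chosen for clarity.

```python
import random

def sample_forecasting_batch(dataset, series_rate, window_len, context_len, rng):
    """Sketch of structured subsampling for forecasting batches.

    dataset: list of sequences, one per time series.
    series_rate: per-series inclusion probability (step 1, assumed Poisson-style).
    window_len: length of each contiguous subsequence (step 2).
    context_len: split point between context and forecast windows (step 3).
    """
    batch = []
    for series in dataset:
        # Step 1: subsample sequentially structured series from the dataset.
        if rng.random() >= series_rate:
            continue
        # Step 2: sample a contiguous subsequence of fixed length.
        start = rng.randrange(len(series) - window_len + 1)
        window = series[start:start + window_len]
        # Step 3: partition into context and ground-truth forecast windows.
        batch.append((window[:context_len], window[context_len:]))
    return batch
```

Each of the three sampling stages contributes to the privacy amplification the paper analyzes, since a given event (or user) only influences a batch when its series is selected and its time step falls inside the sampled window.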
Problem

Research questions and friction points this paper is trying to address.

Differentially private time series forecasting
Structured subsampling for privacy amplification
Privacy guarantees in sequential data analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured subsampling for privacy
Time series forecasting models
Data augmentation amplifies privacy