🤖 AI Summary
To address privacy leakage in distributed time-series data under local differential privacy (LDP), where fixed-window noise can be inadvertently canceled out during aggregation, this paper proposes a collaborative LDP mechanism. The method enables multiple users to jointly generate and allocate structured noise vectors, ensuring that individual perturbations cancel *directionally*—rather than stochastically—during aggregation, thereby preserving strict ε-LDP while significantly improving statistical accuracy. Its key innovation lies in shifting noise design from independent and identically distributed (i.i.d.) assumptions to coordinated, controllable construction, thus overcoming traditional LDP’s inability to model temporal dependencies. Experiments on real-world time-series datasets from healthcare and transportation domains demonstrate that the mechanism improves data utility by 23.6% over baseline LDP approaches and effectively resists aggregate-based inference attacks against individuals, enabling scalable, real-time privacy protection.
📝 Abstract
The rapid growth of smart devices—phones, wearables, IoT sensors, and connected vehicles—has led to an explosion of continuous time series data that offers valuable insights in healthcare, transportation, and more. However, this surge raises significant privacy concerns, as sensitive patterns can reveal personal details. While traditional differential privacy (DP) relies on trusted servers, local differential privacy (LDP) enables users to perturb their own data. However, traditional LDP methods perturb time series data by adding user-specific noise but exhibit vulnerabilities. For instance, noise applied within fixed time windows can be canceled during aggregation (e.g., averaging), enabling adversaries to infer individual statistics over time, thereby eroding privacy guarantees. To address these issues, we introduce a Cooperative Local Differential Privacy (CLDP) mechanism that enhances privacy by distributing noise vectors across multiple users. In our approach, noise is collaboratively generated and assigned so that when all users’ perturbed data is aggregated, the noise cancels out-preserving overall statistical properties while protecting individual privacy. This cooperative strategy not only counters vulnerabilities inherent in time-window-based methods but also scales effectively for large, real-time datasets, striking a better balance between data utility and privacy in multi-user environments.