🤖 AI Summary
This work exposes a critical vulnerability of Local Differential Privacy (LDP) trajectory protocols under data poisoning attacks: an adversary can significantly inflate the frequency of a target trajectory pattern in the perturbed aggregate by injecting only a small number of carefully crafted fake trajectories. To exploit this weakness, we propose TraP, the first efficient poisoning attack framework tailored to trajectory data, featuring a novel prefix-suffix heuristic that reduces the computational complexity of fake trajectory generation from exponential to polynomial time while preserving attack efficacy. Leveraging inverse modeling of LDP protocols, quantitative trajectory pattern characterization, and perturbation response analysis, we validate TraP across multiple mainstream LDP trajectory mechanisms. Experiments show that a small fraction of malicious users suffices to amplify the target pattern's frequency severalfold. This study provides the first systematic characterization of security blind spots in LDP-based trajectory analytics, establishing foundational theoretical insights and empirical evidence to guide the design of robust protocols and effective defenses.
📝 Abstract
Trajectory data, which tracks movements through geographic locations, is crucial for improving real-world applications. However, collecting such sensitive data raises considerable privacy concerns. Local differential privacy (LDP) offers a solution by allowing individuals to locally perturb their trajectory data before sharing it. Despite its privacy benefits, LDP protocols are vulnerable to data poisoning attacks, where attackers inject fake data to manipulate aggregated results. In this work, we make the first attempt to analyze vulnerabilities in several representative LDP trajectory protocols. We propose TraP, a heuristic algorithm for data Poisoning attacks that uses a prefix-suffix method to optimize fake Trajectory selection, significantly reducing computational complexity. Our experimental results demonstrate that our attack can substantially increase target pattern occurrences in the perturbed trajectory dataset with few fake users. This study underscores the urgent need for robust defenses and better protocol designs to safeguard LDP trajectory data against malicious manipulation.
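To give a sense of why a prefix-suffix construction avoids exponential search, the sketch below greedily grows a fake trajectory outward from the target pattern, extending one location at a time on either end and keeping whichever extension scores best. This is only an illustrative reading of the idea described in the abstract, not the paper's actual algorithm: the `score` function stands in for the (unspecified) attack-gain estimate derived from the LDP protocol's perturbation response.

```python
def craft_fake_trajectory(pattern, locations, total_len, score):
    """Greedy prefix-suffix construction of a fake trajectory.

    Starts from the target pattern and repeatedly extends it by one
    location at the front or the back, keeping the extension that a
    caller-supplied `score` function (hypothetical stand-in for the
    attack-gain estimate) rates highest.  Cost is
    O(total_len * |locations|) score evaluations, versus the
    |locations| ** total_len cost of enumerating all trajectories.
    """
    traj = list(pattern)
    while len(traj) < total_len:
        # Best single-location extension at the front (prefix side).
        best_pre = max(([loc] + traj for loc in locations), key=score)
        # Best single-location extension at the back (suffix side).
        best_suf = max((traj + [loc] for loc in locations), key=score)
        traj = best_pre if score(best_pre) >= score(best_suf) else best_suf
    return traj


# Toy usage: locations are labels, and the toy score simply favors
# trajectories containing more of location "C".
fake = craft_fake_trajectory(
    pattern=["A", "B"],
    locations=["A", "B", "C"],
    total_len=5,
    score=lambda t: t.count("C"),
)
```

Because extensions only ever prepend or append, the target pattern always survives as a contiguous subsequence of the crafted trajectory, which is the property the attack needs to preserve.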