🤖 AI Summary
To address the practical difficulty of implementing the “right to be forgotten” for mobile trajectory data, this paper proposes TraceHiding, the first scalable, importance-aware machine unlearning framework designed specifically for trajectory data. Methodologically, it (1) models importance at multiple levels (token, trajectory, and user) by integrating statistical properties including coverage diversity, entropy, and sequence length; (2) introduces an importance-weighted loss and teacher-student knowledge distillation to precisely unlearn high-impact, unique samples while preserving generalizable patterns; and (3) validates efficacy across multiple real-world trajectory datasets and model architectures. Results show TraceHiding unlearns up to 40× faster than full retraining, significantly reduces membership inference attack success rates, and incurs negligible test-accuracy degradation, demonstrating both efficiency and robustness for privacy-preserving trajectory analytics.
📝 Abstract
This work introduces TraceHiding, a scalable, importance-aware machine unlearning framework for mobility trajectory data. Motivated by privacy regulations such as GDPR and CCPA that grant users "the right to be forgotten," TraceHiding removes specified user trajectories from trained deep models without full retraining. It combines a hierarchical, data-driven importance scoring scheme with teacher-student distillation. Importance scores, computed at token, trajectory, and user levels from statistical properties (coverage diversity, entropy, length), quantify each training sample's impact, enabling targeted forgetting of high-impact data while preserving common patterns. The student model retains knowledge on the remaining data and unlearns targeted trajectories through an importance-weighted loss that amplifies forgetting signals for unique samples and attenuates them for frequent ones. We validate on Trajectory-User Linking (TUL) tasks across three real-world higher-order mobility datasets (HO-Rome, HO-Geolife, HO-NYC) and multiple architectures (GRU, LSTM, BERT, ModernBERT, GCN-TULHOR), against strong unlearning baselines including SCRUB, NegGrad, NegGrad+, Bad-T, and Finetuning. Experiments under uniform and targeted user deletion show that TraceHiding, especially its entropy-based variant, achieves superior unlearning accuracy, competitive membership inference attack (MIA) resilience, and up to 40× speedup over retraining with minimal test-accuracy loss. Results highlight robustness to adversarial deletion of high-information users and consistent performance across models. To our knowledge, this is the first systematic study of machine unlearning for trajectory data, providing a reproducible pipeline with public code and preprocessing tools.
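To make the core idea concrete, here is a minimal sketch of what an entropy-based importance score and an importance-weighted unlearning objective could look like. This is an illustration of the general technique the abstract describes, not the paper's actual implementation; the function names (`trajectory_entropy`, `importance_weight`, `unlearning_loss`) and the exact weighting formula are assumptions for exposition.

```python
import math
from collections import Counter

def trajectory_entropy(tokens):
    """Shannon entropy (bits) over a trajectory's location tokens.
    A hypothetical stand-in for the paper's entropy-based importance
    signal: diverse trajectories score high, repetitive ones score low."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def importance_weight(tokens, max_entropy):
    """Normalize entropy to [0, 1] against a dataset-level maximum,
    so unique, high-entropy trajectories receive larger forgetting
    weights and frequent, low-entropy ones receive smaller weights."""
    if max_entropy <= 0:
        return 0.0
    return min(trajectory_entropy(tokens) / max_entropy, 1.0)

def unlearning_loss(retain_div, forget_div, weight, alpha=1.0):
    """Illustrative importance-weighted objective: keep the student
    close to the teacher on retained data (minimize retain_div) while
    pushing it away from the teacher on forgotten data (the negated,
    weight-scaled forget_div term). `alpha` balances the two goals."""
    return retain_div - alpha * weight * forget_div
```

In this sketch, `retain_div` and `forget_div` would be per-batch divergences (e.g. KL divergence between student and teacher logits); the key point is only that the forgetting term is scaled per-sample by the importance weight, which is what lets high-information trajectories be erased aggressively while common mobility patterns are left largely untouched.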