🤖 AI Summary
This work addresses location-privacy leakage in dashcam videos, which persists even after GPS metadata are removed because identifiable visual cues remain in the background. To mitigate this, the authors propose PPEDCRF, a framework that employs a dynamic conditional random field (CRF) to track location-sensitive regions across video frames. The approach integrates hierarchical sensitivity modeling with a normalized control penalty mechanism to apply graded perturbations to sensitive background content. In addition, it introduces a utility-preserving noise-injection strategy that maintains high performance on foreground tasks such as object detection and semantic segmentation. Experiments on public driving datasets demonstrate that the method significantly reduces the success rate of location-retrieval attacks (measured by Top-k accuracy) while keeping object-detection mAP and segmentation accuracy comparable to state-of-the-art baselines.
📝 Abstract
Dashcam videos collected by autonomous or assisted-driving systems are increasingly shared for safety auditing and model improvement. Even when explicit GPS metadata are removed, an attacker can still infer the recording location by matching background visual cues (e.g., buildings and road layouts) against large-scale street-view imagery. This paper studies location-privacy leakage under a background-based retrieval attacker, and proposes PPEDCRF, a privacy-preserving enhanced dynamic conditional random field framework that injects calibrated perturbations only into inferred location-sensitive background regions while preserving foreground detection utility. PPEDCRF consists of three components: (i) a dynamic CRF that enforces temporal consistency to discover and track location-sensitive regions across frames, (ii) a normalized control penalty (NCP) that allocates perturbation strength according to a hierarchical sensitivity model, and (iii) a utility-preserving noise injection module that minimizes interference with object detection and segmentation. Experiments on public driving datasets demonstrate that PPEDCRF significantly reduces location-retrieval attack success (e.g., Top-k retrieval accuracy) while maintaining competitive detection performance (e.g., mAP and segmentation metrics) compared with common baselines such as global noise, white-noise masking, and feature-based anonymization. The source code is available at https://github.com/mabo1215/PPEDCRF.git
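To make the core idea of sensitivity-graded background perturbation concrete, the sketch below illustrates it in NumPy. This is a minimal illustration, not the authors' implementation: the function name, the per-pixel sensitivity map, and the Gaussian noise model are all assumptions; in PPEDCRF the sensitivity map would come from the dynamic CRF and the noise strength from the NCP allocation.

```python
import numpy as np

def graded_background_perturbation(frame, sensitivity, noise_scale=0.1, rng=None):
    """Add noise to a frame in proportion to a per-pixel sensitivity map.

    frame:       H x W x 3 float array with values in [0, 1]
    sensitivity: H x W float array in [0, 1]; ~1 for location-revealing
                 background pixels, ~0 for foreground (vehicles, pedestrians)
    noise_scale: standard deviation of the Gaussian perturbation
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, noise_scale, size=frame.shape)
    # Scale the perturbation by sensitivity: foreground pixels (sensitivity
    # near 0) are left essentially untouched, which is what preserves
    # detection/segmentation utility in the paper's setting.
    perturbed = frame + sensitivity[..., None] * noise
    return np.clip(perturbed, 0.0, 1.0)
```

With an all-zero sensitivity map the frame passes through unchanged, while an all-one map yields a uniformly perturbed frame; intermediate values give the graded behavior the NCP is designed to control.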