🤖 AI Summary
This work exposes a critical security vulnerability in deep learning-based radio frequency (RF) fingerprinting systems under domain shift: models entangle device-specific RF fingerprints with environment- and signal-pattern-dependent features, leading to cross-domain targeted misclassification that can be exploited as a backdoor attack vector. Through an adversarial-driven, end-to-end evaluation spanning raw-signal modeling, multi-scenario real-world measurements, domain adaptation analysis, and adversarial example generation, we systematically demonstrate, for the first time, that this vulnerability enables high-success-rate spoofing attacks in practical wireless environments. We identify feature entanglement as the root cause, whose induced anomalous classification behavior evades conventional defenses such as confidence-threshold filtering. Our findings formally define a novel "entanglement-driven" attack surface and advocate a paradigm shift in RF fingerprinting system design: toward feature disentanglement as a foundational principle for achieving both robustness and security.
📄 Abstract
Radio frequency (RF) fingerprinting, which extracts unique hardware imperfections of radio devices, has emerged as a promising physical-layer device identification mechanism in zero trust architectures and beyond-5G networks. In particular, deep learning (DL) methods have demonstrated state-of-the-art performance in this domain. However, existing approaches have primarily focused on enhancing system robustness against temporal and spatial variations in wireless environments, while the security vulnerabilities of these DL-based approaches have often been overlooked. In this work, we systematically investigate the security risks of DL-based RF fingerprinting systems through an adversarial-driven experimental analysis. We observe a consistent misclassification behavior in DL models under domain shift, where a given device is frequently misclassified as another specific device. Our analysis, based on extensive real-world experiments, demonstrates that this behavior can be exploited as an effective backdoor allowing external attackers to intrude into the system. Furthermore, we show that training DL models on raw received signals causes them to entangle RF fingerprints with environmental and signal-pattern features, creating additional attack vectors that cannot be mitigated solely through post-processing security measures such as confidence thresholds.
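To make the confidence-threshold defense concrete, here is a minimal hypothetical sketch (function names, logits, and the threshold value are illustrative, not taken from the paper) of why such filtering fails against entangled misclassifications: a model that is *confidently wrong* about a cross-domain signal produces exactly the kind of high-confidence prediction the filter is designed to accept.

```python
import numpy as np

def softmax(logits):
    """Convert raw logits to a probability distribution over device classes."""
    z = logits - logits.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def confidence_threshold_filter(probs, threshold=0.9):
    """Accept the top-class prediction only if its probability clears the threshold."""
    top = int(np.argmax(probs))
    return top, bool(probs[top] >= threshold)

# In-domain signal from device 3: high confidence, correctly accepted.
in_domain = softmax(np.array([0.1, 0.2, 0.1, 5.0]))
print(confidence_threshold_filter(in_domain))       # accepted

# Cross-domain signal actually transmitted by device 1, but the model's
# entangled features map it to device 3 with *higher* confidence than the
# legitimate case -- so the threshold filter accepts the spoof as well.
cross_domain = softmax(np.array([0.3, 0.1, 0.2, 6.0]))
print(confidence_threshold_filter(cross_domain))    # also accepted
```

The point of the sketch is that the defense only inspects output confidence; because the entanglement-driven misclassification is systematic rather than noisy, it does not manifest as low confidence and therefore passes the filter unchanged.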