🤖 AI Summary
To address the scarcity of labeled data and reliance on prior knowledge in few-shot specific emitter identification (SEI), this paper proposes a zero-prior, complex-domain joint framework achieving high-accuracy identification from only ten received symbols. Methodologically, we introduce complex-valued variational mode decomposition (C-VMD) to reconstruct raw signals and enhance hardware fingerprint representation; integrate a temporal convolutional network (TCN) with a time–space-decoupled spatial attention mechanism to model dynamic modulation characteristics; and design a lightweight branch transfer network that leverages pretrained knowledge without requiring auxiliary datasets. Evaluated on a public SEI benchmark, our approach achieves 96% identification accuracy using merely ten symbols per emitter—substantially outperforming existing few-shot SEI methods. Ablation studies confirm the effectiveness and synergistic contributions of each component.
📝 Abstract
Specific emitter identification (SEI) utilizes passive hardware characteristics to authenticate transmitters, providing a robust physical-layer security solution. However, most deep-learning-based methods rely on extensive data or require prior information, which poses challenges in real-world scenarios with limited labeled data. We propose an integrated complex variational mode decomposition algorithm that decomposes and reconstructs complex-valued signals to approximate the original transmitted signals, thereby enabling more accurate feature extraction. We further utilize a temporal convolutional network to effectively model the sequential signal characteristics, and introduce a spatial attention mechanism to adaptively weight informative signal segments, significantly enhancing identification performance. Additionally, the branch network allows leveraging pre-trained weights from other data while reducing the need for auxiliary datasets. Ablation experiments on the simulated data demonstrate the effectiveness of each component of the model. An accuracy comparison on a public dataset reveals that our method achieves 96% accuracy using only 10 symbols without requiring any prior knowledge.