🤖 AI Summary
To address the low accuracy of few-shot keyword spotting (FS-KWS) on edge devices under stringent false-alarm constraints, this paper proposes a lightweight self-supervised speech representation learning framework augmented with knowledge distillation. The authors incorporate sub-center ArcFace loss into Wav2Vec 2.0 teacher training and add attention-based feature dimensionality reduction to enhance inter-class separability and intra-class compactness. The learned representations are then distilled into a lightweight ResNet-15 student model suited to edge deployment. Evaluated on the Google Speech Commands dataset under a 1% false acceptance rate constraint, the method raises 10-shot recognition accuracy from 33.4% to 74.1%, a substantial improvement over prior approaches. The resulting model combines high accuracy, robustness to acoustic variability, and practical feasibility on resource-constrained edge devices.
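The summary above mentions distilling the teacher's learned representations into the student. The paper does not spell out the exact loss here, but a common form of embedding-level distillation is a cosine-distance penalty between L2-normalized teacher and student embeddings; the sketch below (function name and loss form are illustrative assumptions, not the paper's stated method) shows one minimal numpy version:

```python
import numpy as np

def embedding_distillation_loss(student_emb, teacher_emb):
    """Hypothetical embedding-level distillation loss (not the paper's exact form).

    student_emb, teacher_emb: (B, D) batches of embeddings.
    Returns mean cosine distance in [0, 2]; 0 when directions match exactly.
    """
    # L2-normalize both sets of embeddings so only direction matters
    s = student_emb / np.linalg.norm(student_emb, axis=1, keepdims=True)
    t = teacher_emb / np.linalg.norm(teacher_emb, axis=1, keepdims=True)
    # 1 - cosine similarity, averaged over the batch
    return np.mean(1.0 - np.sum(s * t, axis=1))
```

In a training loop this term would be minimized alongside (or instead of) a classification loss, pulling the ResNet-15 student's embedding space toward the dimensionality-reduced Wav2Vec 2.0 teacher space.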
📝 Abstract
Keyword spotting plays a critical role in enabling hands-free interaction on battery-powered edge devices. Few-Shot Keyword Spotting (FS-KWS) addresses the scalability and adaptability challenges of traditional systems by enabling recognition of custom keywords from only a few examples. However, existing FS-KWS systems achieve subpar accuracy at desirable false acceptance rates, particularly in resource-constrained edge environments. To address these issues, we propose a training scheme that leverages self-supervised learning models for robust feature extraction, dimensionality reduction, and knowledge distillation. The teacher model, based on Wav2Vec 2.0, is trained with sub-center ArcFace loss, which enhances inter-class separability and intra-class compactness. To enable efficient deployment on edge devices, we introduce attention-based dimensionality reduction and train a standard lightweight ResNet-15 student model. We evaluate the proposed approach on the English portion of the Multilingual Spoken Words Corpus (MSWC) and the Google Speech Commands (GSC) dataset. Notably, the proposed training method improves 10-shot classification accuracy on 11 GSC classes from 33.4% to 74.1% at a 1% false alarm rate, making it significantly better suited to real-world use.
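Sub-center ArcFace, named in the abstract as the teacher's training loss, extends ArcFace by giving each class K sub-center prototypes: a sample is compared against all K and only the nearest one counts, which tolerates intra-class variability while the additive angular margin still enforces inter-class separation. A minimal numpy sketch of the logit computation (function name, shapes, and hyperparameter values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def subcenter_arcface_logits(emb, weights, labels, margin=0.5, scale=30.0):
    """Sub-center ArcFace logits, ready for softmax cross-entropy.

    emb:     (B, D) embeddings
    weights: (C, K, D) K sub-center prototypes per class
    labels:  (B,) integer class labels
    """
    # L2-normalize embeddings and sub-center prototypes
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=2, keepdims=True)
    # cosine similarity of each sample to every sub-center: (B, C, K)
    cos = np.einsum('bd,ckd->bck', e, w)
    # keep only the closest sub-center per class: (B, C)
    cos = cos.max(axis=2)
    # additive angular margin applied to the target class only
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    rows = np.arange(len(labels))
    theta[rows, labels] += margin
    return scale * np.cos(theta)
```

Feeding these scaled, margin-penalized logits into standard cross-entropy forces target-class embeddings to sit within `margin` radians of a sub-center, which is the mechanism behind the inter-class separability and intra-class compactness claimed above.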