🤖 AI Summary
This work addresses the security threat posed by deepfake audio by proposing a hierarchical contrastive attention framework. The method introduces, for the first time, a hierarchical attention mechanism that jointly models temporal and structural dependencies across time frames, adjacent layers, and layer groups of a self-supervised speech encoder. Integrated with margin-based contrastive learning, the framework enhances the domain invariance of the learned representations. Evaluated in an end-to-end setting, the model demonstrates substantial improvements in cross-domain generalisation, achieving equal error rates (EERs) of 1.93% and 6.87% on the ASVspoof 2021 DF and In-the-Wild datasets, respectively, a relative improvement of 36.6% and 22.5% over existing layer-weighting approaches.
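For intuition, here is a minimal PyTorch sketch of the kind of hierarchical attention pooling described above: temporal attention within each layer, attention within contiguous layer groups, then attention across group summaries. The module names, grouping scheme, and hyperparameters are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalLayerAttention(nn.Module):
    """Sketch of hierarchical attention over SSL layer outputs.

    Pools features of shape (batch, layers, time, dim) in three
    stages: temporal attention within each layer, attention within
    contiguous layer groups, then attention across group summaries.
    This is an assumed design, not the published implementation.
    """

    def __init__(self, dim: int, num_layers: int, group_size: int = 4):
        super().__init__()
        assert num_layers % group_size == 0
        self.group_size = group_size
        # One learned scoring head per stage; scores are softmax-normalised.
        self.time_score = nn.Linear(dim, 1)
        self.layer_score = nn.Linear(dim, 1)
        self.group_score = nn.Linear(dim, 1)

    @staticmethod
    def _attn_pool(x: torch.Tensor, scorer: nn.Module, dim: int) -> torch.Tensor:
        # Weighted sum along `dim` using learned attention weights.
        w = F.softmax(scorer(x), dim=dim)
        return (w * x).sum(dim=dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_layers, time, dim)
        B, L, T, D = feats.shape
        # 1) Temporal attention: pool each layer's frames -> (B, L, D).
        layer_vecs = self._attn_pool(feats, self.time_score, dim=2)
        # 2) Pool within groups of adjacent layers -> (B, groups, D).
        groups = layer_vecs.reshape(B, L // self.group_size, self.group_size, D)
        group_vecs = self._attn_pool(groups, self.layer_score, dim=2)
        # 3) Attention across layer groups -> (B, D) utterance embedding.
        return self._attn_pool(group_vecs, self.group_score, dim=1)

# Example: pool 24 transformer layers of 768-dim frame features.
pool = HierarchicalLayerAttention(dim=768, num_layers=24, group_size=4)
utt_emb = pool(torch.randn(2, 24, 100, 768))  # -> (2, 768)
```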
📝 Abstract
Audio deepfakes generated by modern TTS and voice conversion systems are increasingly difficult to distinguish from real speech, raising serious risks for security and online trust. While state-of-the-art self-supervised models provide rich multi-layer representations, existing detectors treat layers independently and overlook the temporal and hierarchical dependencies that are critical for identifying synthetic artefacts. We propose HierCon, a hierarchical layer attention framework combined with margin-based contrastive learning that models dependencies across temporal frames, neighbouring layers, and layer groups while encouraging domain-invariant embeddings. Evaluated on the ASVspoof 2021 DF and In-the-Wild datasets, our method achieves state-of-the-art performance (EERs of 1.93% and 6.87%), improving over independent layer weighting by 36.6% and 22.5%, respectively. The results and attention visualisations confirm that hierarchical modelling enhances generalisation to out-of-domain generation techniques and recording conditions.
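To make the contrastive component concrete, the sketch below implements a standard pairwise margin-based contrastive loss over utterance embeddings: same-class pairs (bona fide with bona fide, spoof with spoof, regardless of domain) are pulled together, while different-class pairs are pushed apart by at least a margin. The distance metric, normalisation, and margin value are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def margin_contrastive_loss(emb: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 0.5) -> torch.Tensor:
    """Pairwise margin-based contrastive loss over a batch.

    emb:    (batch, dim) utterance embeddings.
    labels: (batch,) class labels, e.g. 0 = bona fide, 1 = spoof.
    """
    emb = F.normalize(emb, dim=-1)        # unit-norm embeddings
    dist = torch.cdist(emb, emb)          # pairwise Euclidean distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    # Exclude trivial self-pairs on the diagonal.
    mask = 1.0 - torch.eye(len(labels), device=emb.device)
    pull = same * dist.pow(2)                            # attract positives
    push = (1.0 - same) * F.relu(margin - dist).pow(2)   # repel negatives
    return ((pull + push) * mask).sum() / mask.sum()

# Example usage with random embeddings.
emb = torch.randn(8, 256)
labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
loss = margin_contrastive_loss(emb, labels)
```

Because the positive term ignores which domain a pair comes from, minimising it drives bona fide (and spoof) embeddings from different generators and recording conditions toward a shared region, which is one plausible route to the domain invariance the abstract describes.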