Addressing Gradient Misalignment in Data-Augmented Training for Robust Speech Deepfake Detection

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In speech deepfake detection (SDD), conventional data augmentation (DA) induces misalignment between the gradients of original and augmented samples, causing optimization conflicts and slow convergence. To address this, we propose a dual-path gradient-alignment training framework that explicitly models and harmonizes the gradient directions of two backpropagation paths. Leveraging a shared backbone, the framework establishes a cooperative training mechanism with parallel original and augmented branches, integrated with the RawBoost augmentation strategy. Gradient-direction regularization is introduced to mitigate parameter-update conflicts, thereby significantly improving optimization efficiency. Evaluated on the In-the-Wild benchmark, our method achieves an 18.69% relative reduction in equal error rate (EER), substantially decreases the required training iterations, and simultaneously enhances both detection accuracy and generalization capability.

📝 Abstract
In speech deepfake detection (SDD), data augmentation (DA) is commonly used to improve model generalization across varied speech conditions and spoofing attacks. However, during training, the backpropagated gradients from original and augmented inputs may misalign, which can result in conflicting parameter updates. These conflicts could hinder convergence and push the model toward suboptimal solutions, thereby reducing the benefits of DA. To investigate and address this issue, we design a dual-path data-augmented (DPDA) training framework with gradient alignment for SDD. In our framework, each training utterance is processed through two input paths: one using the original speech and the other with its augmented version. This design allows us to compare and align their backpropagated gradient directions to reduce optimization conflicts. Our analysis shows that approximately 25% of training iterations exhibit gradient conflicts between the original inputs and their augmented counterparts when using RawBoost augmentation. By resolving these conflicts with gradient alignment, our method accelerates convergence by reducing the number of training epochs and achieves up to an 18.69% relative reduction in Equal Error Rate on the In-the-Wild dataset compared to the baseline.
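The abstract's finding that roughly 25% of iterations exhibit gradient conflicts rests on a simple test: the gradients of the original and augmented branches conflict when the dot product of their flattened vectors is negative. A minimal sketch of that check (function name and inputs are illustrative, not from the paper):

```python
import numpy as np

def gradients_conflict(g_orig, g_aug):
    """Return True if the two backpropagated gradients point in
    conflicting directions, i.e. their dot product is negative.

    g_orig, g_aug: gradients of the loss on the original and
    augmented inputs, flattened into vectors over shared parameters.
    """
    g_orig = np.ravel(np.asarray(g_orig, dtype=float))
    g_aug = np.ravel(np.asarray(g_aug, dtype=float))
    return float(np.dot(g_orig, g_aug)) < 0.0
```

Counting `True` results over training iterations yields the conflict rate the abstract reports for RawBoost-augmented inputs.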
Problem

Research questions and friction points this paper is trying to address.

Gradient misalignment between original and augmented inputs causes conflicting parameter updates
Conflicting updates hinder convergence and push the model toward suboptimal solutions
Reduced benefits of data augmentation for robust speech deepfake detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-path framework for original and augmented inputs
Gradient alignment to reduce optimization conflicts
Reduced training epochs and improved detection accuracy
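The paper does not spell out its gradient-alignment regularizer here, but one standard way to resolve such conflicts is a PCGrad-style projection: when the augmented-path gradient opposes the original-path gradient, drop its conflicting component before updating the shared backbone. A hedged sketch under that assumption (names are illustrative):

```python
import numpy as np

def align_gradient(g_orig, g_aug, eps=1e-12):
    """Combine the two branch gradients, resolving conflicts.

    If g_aug conflicts with g_orig (negative dot product), project
    g_aug onto the plane orthogonal to g_orig, removing the component
    that would cancel the original branch's update (PCGrad-style;
    the paper's exact regularization may differ).
    """
    g_orig = np.asarray(g_orig, dtype=float)
    g_aug = np.asarray(g_aug, dtype=float)
    dot = float(np.dot(g_aug, g_orig))
    if dot < 0.0:
        g_aug = g_aug - dot / (float(np.dot(g_orig, g_orig)) + eps) * g_orig
    # The shared backbone is updated with the combined gradient.
    return g_orig + g_aug
```

When the branches already agree, this reduces to the plain sum of the two gradients, so the mechanism only intervenes on the conflicting iterations.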