🤖 AI Summary
Voice deepfakes pose significant audio privacy risks to users, yet existing defenses suffer from poor adaptability, reliance on white-box model knowledge, and high computational overhead.
Method: We propose a user-centric, real-time, black-box defense framework built on a novel universal frequency-domain perturbation learned from only a few user samples. It requires no access to target models, generalizes across audio of varying lengths, and supports lightweight online deployment. The approach integrates frequency-domain noise injection, black-box adversarial learning, and few-shot fine-tuning to jointly optimize speech intelligibility and privacy robustness.
Contribution/Results: Extensive experiments demonstrate effectiveness against six text-to-speech and five speaker verification models. The framework protects audio with ultra-low memory consumption (0.004 GB), runs 3×–7000× faster than baseline defenses, and exhibits strong transferability and resilience against adaptive attacks.
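To make the "universal frequency-domain perturbation with cross-length generalization" idea concrete, here is a minimal illustrative sketch: a single fixed noise patch in rFFT space is tiled across non-overlapping frames, so one patch protects audio of any length. The frame length, the lack of windowing/overlap, and the random patch are all simplifying assumptions for illustration, not the paper's released implementation.

```python
import numpy as np

def apply_universal_freq_perturbation(audio, patch, frame_len=512):
    """Tile one fixed frequency-domain noise patch across non-overlapping
    frames, so a single patch covers audio of arbitrary length.
    Illustrative sketch only, not the paper's actual algorithm."""
    out = audio.astype(np.float64)
    n_bins = frame_len // 2 + 1
    assert patch.shape == (n_bins,), "patch must match the rFFT bin count"
    for start in range(0, len(audio) - frame_len + 1, frame_len):
        frame = out[start:start + frame_len]
        spec = np.fft.rfft(frame)        # to the frequency domain
        spec += patch                    # same universal perturbation per frame
        out[start:start + frame_len] = np.fft.irfft(spec, n=frame_len)
    return out

# Usage: a tiny random "patch" applied to 1 s of 16 kHz audio
rng = np.random.default_rng(0)
audio = rng.standard_normal(16000).astype(np.float32)
patch = 0.01 * (rng.standard_normal(257) + 1j * rng.standard_normal(257))
protected = apply_universal_freq_perturbation(audio, patch)
```

Because the patch lives in the frequency domain and is reused frame by frame, it never needs to be re-optimized for a new clip length, which is the property the summary attributes to the real framework.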
📝 Abstract
The rapid advancement of voice deepfake technologies has raised serious concerns about user audio privacy, as attackers increasingly exploit publicly available voice data to generate convincing fake audio for malicious purposes such as identity theft, financial fraud, and misinformation campaigns. While existing defense methods offer partial protection, they face critical limitations, including weak adaptability to unseen user data, poor scalability to long audio, rigid reliance on white-box knowledge, and high computational and time costs during the encryption process. To address these challenges and defend against personalized voice deepfake threats, we propose Enkidu, a novel user-oriented privacy-preserving framework that leverages universal frequential perturbations generated through black-box knowledge and few-shot training on a small amount of user data. These highly malleable frequency-domain noise patches enable real-time, lightweight protection with strong generalization across variable-length audio and robust resistance to voice deepfake attacks, all while preserving perceptual quality and speech intelligibility. Notably, Enkidu is 50 to 200 times more memory-efficient (as low as 0.004 gigabytes) and 3 to 7000 times more runtime-efficient (real-time coefficient as low as 0.004) than six state-of-the-art countermeasures. Extensive experiments across six mainstream text-to-speech models and five cutting-edge automated speaker verification models demonstrate the effectiveness, transferability, and practicality of Enkidu in defending against both vanilla and adaptive voice deepfake attacks.
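The abstract's "black-box knowledge and few-shot training" can be illustrated with a gradient-free search: since the defender cannot backpropagate through the target models, the perturbation is updated from score queries alone. The sketch below uses an NES-style finite-difference estimator against an abstract `score_fn`; that function, the sample count, and the step sizes are all assumptions standing in for whatever black-box objective (e.g., speaker-similarity drop balanced against intelligibility) the real framework optimizes.

```python
import numpy as np

def optimize_patch_black_box(score_fn, n_bins, iters=200, n_samples=8,
                             sigma=0.01, lr=0.05, seed=0):
    """Gradient-free (NES-style) ascent on a black-box score function.
    `score_fn` is a hypothetical stand-in for the defender's objective;
    no gradients from the target models are ever required."""
    rng = np.random.default_rng(seed)
    patch = np.zeros(n_bins)
    for _ in range(iters):
        eps = rng.standard_normal((n_samples, n_bins))
        # Antithetic score differences approximate a directional derivative
        scores = np.array([score_fn(patch + sigma * e) - score_fn(patch - sigma * e)
                           for e in eps])
        grad_est = (eps * scores[:, None]).mean(axis=0) / (2 * sigma)
        patch += lr * grad_est          # ascend the estimated gradient
    return patch

# Toy usage: the black-box score peaks when the patch matches a hidden target
target = 0.5 * np.ones(16)
score = lambda p: -np.sum((p - target) ** 2)
patch = optimize_patch_black_box(score, n_bins=16)
```

The key point mirrored from the abstract is that only query access to the scoring pipeline is needed, and a small budget of user samples (here, score evaluations) suffices to fit the universal patch.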