Can Current Detectors Catch Face-to-Voice Deepfake Attacks?

📅 2025-10-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the detection efficacy of mainstream audio deepfake detectors against FOICE (face-to-voice) attacks, which generate synthetic speech from facial images, and finds the detectors severely compromised in both clean and noisy conditions. Method: To address this vulnerability, the authors propose a lightweight supervised fine-tuning strategy tailored to FOICE data, preserving the original detector architecture while adapting its decision boundaries via targeted training. Contribution/Results: Experiments demonstrate substantial gains in FOICE detection accuracy after fine-tuning; however, robustness degrades on unseen generative models (e.g., SpeechT5), exposing an inherent trade-off between specialization and generalization. The work establishes the first benchmark for FOICE detectability, introduces a generator-aware fine-tuning paradigm, and provides empirical evidence and design insights for the adaptive optimization and generalization assurance of audio deepfake detectors.
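The fine-tuning idea described above, keeping the detector's backbone fixed and adapting only its decision boundary on FOICE-labeled data, can be illustrated with a minimal sketch. Everything below (the random "frozen backbone" projection, the Gaussian stand-ins for real and FOICE-like speech features, and the hyperparameters) is hypothetical, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed projection standing in for the pretrained
# detector's feature extractor (its weights are never updated).
W_frozen = np.random.default_rng(1).normal(size=(16, 8))

def backbone(x):
    return x @ W_frozen

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-ins for detector inputs: bona fide speech vs. FOICE-like
# synthetic speech, modeled as Gaussians with a distribution shift.
real  = rng.normal(0.0, 1.0, size=(200, 16))
foice = rng.normal(0.8, 1.0, size=(200, 16))
X = backbone(np.vstack([real, foice]))
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = deepfake

# Lightweight supervised fine-tuning: only the linear decision head is
# trained, so the detector architecture itself is preserved.
w, b = np.zeros(X.shape[1]), 0.0
lr = 0.05
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"post-fine-tuning accuracy on FOICE-labeled data: {acc:.2f}")
```

Because only the head's weights move, the adaptation is cheap and the detector keeps its original architecture, matching the "lightweight" framing in the summary.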

📝 Abstract
The rapid advancement of generative models has enabled the creation of increasingly stealthy synthetic voices, commonly referred to as audio deepfakes. A recent technique, FOICE [USENIX'24], demonstrates a particularly alarming capability: generating a victim's voice from a single facial image, without requiring any voice sample. By exploiting correlations between facial and vocal features, FOICE produces synthetic voices realistic enough to bypass industry-standard authentication systems, including WeChat Voiceprint and Microsoft Azure. This raises serious security concerns, as facial images are far easier for adversaries to obtain than voice samples, dramatically lowering the barrier to large-scale attacks. In this work, we investigate two core research questions: (RQ1) whether state-of-the-art audio deepfake detectors can reliably detect FOICE-generated speech under clean and noisy conditions, and (RQ2) whether fine-tuning these detectors on FOICE data improves detection without overfitting, thereby preserving robustness to unseen voice generators such as SpeechT5. Our study makes three contributions. First, we present the first systematic evaluation of FOICE detection, showing that leading detectors consistently fail under both standard and noisy conditions. Second, we introduce targeted fine-tuning strategies that capture FOICE-specific artifacts, yielding significant accuracy improvements. Third, we assess generalization after fine-tuning, revealing trade-offs between specialization to FOICE and robustness to unseen synthesis pipelines. These findings expose fundamental weaknesses in today's defenses and motivate new architectures and training protocols for next-generation audio deepfake detection.
Problem

Research questions and friction points this paper is trying to address.

Detecting face-to-voice deepfake attacks that bypass voice authentication systems
Evaluating audio deepfake detectors' reliability under various conditions
Improving detection accuracy while maintaining robustness to unseen generators
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detects FOICE deepfakes using targeted fine-tuning strategies
Evaluates generalization trade-offs post-fine-tuning for robustness
Exposes defense weaknesses to motivate new detection architectures
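The specialization/generalization trade-off noted above can be illustrated with a toy experiment: a linear detection head is fitted against one synthetic-speech distribution (standing in for FOICE) and then scored against a differently shifted distribution (standing in for an unseen generator such as SpeechT5). The features, frozen projection, and linear head below are all hypothetical stand-ins, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Fixed projection standing in for a frozen detector backbone.
W_frozen = np.random.default_rng(1).normal(size=(16, 8))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Bona fide speech vs. two fake distributions that shift the feature mean
# in different directions (seen generator vs. unseen generator).
shift_seen   = 0.8 * np.ones(16)               # "FOICE-like" fakes
shift_unseen = 0.8 * np.tile([1.0, -1.0], 8)   # "unseen generator" fakes
real        = rng.normal(size=(400, 16))
fake_seen   = rng.normal(size=(400, 16)) + shift_seen
fake_unseen = rng.normal(size=(400, 16)) + shift_unseen

# Fine-tune a linear head on the seen generator only.
X = np.vstack([real[:200], fake_seen[:200]]) @ W_frozen
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = deepfake
w, b = np.zeros(8), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.05 * X.T @ (p - y) / len(y)
    b -= 0.05 * np.mean(p - y)

def fake_recall(fakes):
    # Fraction of fakes the fine-tuned head flags as fake.
    return np.mean(sigmoid(fakes @ W_frozen @ w + b) > 0.5)

r_seen = fake_recall(fake_seen[200:])
r_unseen = fake_recall(fake_unseen[200:])
print(f"recall on seen generator:   {r_seen:.2f}")
print(f"recall on unseen generator: {r_unseen:.2f}")
```

Because the head adapted only to the seen generator's shift, its recall drops on the differently shifted distribution, mirroring the specialization-versus-generalization trade-off the paper reports.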
Nguyen Linh Bao Nguyen
CSIRO’s Data61, Melbourne, Australia
Alsharif Abuadbba
CSIRO’s Data61, Sydney, Australia
Kristen Moore
Team Lead - CSIRO's Data61
AI Security · AI Safety · AI for Cyber Security
Tingming Wu
CSIRO’s Data61, Melbourne, Australia