Precision-Varying Prediction (PVP): Robustifying ASR systems against adversarial attacks

πŸ“… 2026-03-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work proposes a training-free, unified defense against adversarial attacks on automatic speech recognition (ASR) systems, which are known to be highly vulnerable to such attacks. The approach exploits the fact that adversarial examples are unusually sensitive to the model's numerical precision at inference time: precision levels are randomly sampled during prediction, and a detection mechanism is built on a Gaussian classifier over the resulting output discrepancies. To the authors' knowledge, this is the first method to reveal and exploit the distinctive behavior of adversarial samples under variable-precision inference. Extensive evaluations show that the technique significantly improves robustness across diverse ASR architectures and attack types while enabling efficient detection of adversarial inputs.

πŸ“ Abstract
With the increasing deployment of automated and agentic systems, ensuring the adversarial robustness of automatic speech recognition (ASR) models has become critical. We observe that changing the precision of an ASR model during inference reduces the likelihood of adversarial attacks succeeding. We take advantage of this fact to make the models more robust by randomly sampling the precision at each prediction. Moreover, the insight can be turned into an adversarial example detection strategy by comparing outputs resulting from different precisions and leveraging a simple Gaussian classifier. An experimental analysis demonstrates a significant increase in robustness and competitive detection performance for various ASR models and attack types.
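The two ingredients described above can be illustrated with a toy sketch. This is not the paper's implementation: the actual method varies the numerical precision (e.g. float16/bfloat16/float32) of a full ASR model's weights at inference time, whereas here a small linear scorer and a fixed-point quantizer (both hypothetical stand-ins) serve only to show (1) prediction under a randomly sampled precision and (2) detection by scoring cross-precision output discrepancies with a Gaussian fitted on benign inputs.

```python
# Toy sketch of Precision-Varying Prediction (PVP). All model details here
# (WEIGHTS, the fixed-point quantizer, the feature vectors) are illustrative
# assumptions, not the paper's setup.
import math
import random

WEIGHTS = [0.31, -1.20, 0.57, 0.88]  # stand-in model parameters

def quantize(x, bits):
    """Round x onto a fixed-point grid with `bits` fractional bits."""
    scale = 2 ** bits
    return round(x * scale) / scale

def predict(features, bits):
    """Model output with weights held at the given precision."""
    return sum(quantize(w, bits) * f for w, f in zip(WEIGHTS, features))

def pvp_predict(features, precisions=(4, 6, 8)):
    """Precision-varying prediction: sample a random precision per call."""
    return predict(features, random.choice(precisions))

def discrepancy(features, lo=4, hi=8):
    """Detection statistic: output gap between low and high precision.
    Adversarial inputs tend to show larger gaps than benign ones."""
    return abs(predict(features, lo) - predict(features, hi))

def gaussian_log_density(d, mu, sigma):
    """Log-density of a discrepancy under the benign-data Gaussian;
    inputs with very low density are flagged as potentially adversarial."""
    return (-0.5 * ((d - mu) / sigma) ** 2
            - math.log(sigma * math.sqrt(2 * math.pi)))

# Fit the Gaussian on discrepancies of (simulated) benign inputs.
random.seed(0)
benign = [[random.uniform(-1, 1) for _ in WEIGHTS] for _ in range(200)]
ds = [discrepancy(f) for f in benign]
mu = sum(ds) / len(ds)
sigma = (sum((d - mu) ** 2 for d in ds) / len(ds)) ** 0.5
```

In the real system the discrepancy would be computed between transcripts (e.g. an edit distance between decoding outputs at different precisions) rather than between scalar scores, but the detection logic is the same.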
Problem

Research questions and friction points this paper is trying to address.

adversarial robustness
automatic speech recognition
adversarial attacks
ASR systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Precision-Varying Prediction
adversarial robustness
automatic speech recognition
adversarial example detection
numerical precision
πŸ”Ž Similar Papers
No similar papers found.
MatΓ­as Pizarro
Faculty of Computer Science, Ruhr University Bochum, Germany
Raghavan Narasimhan
Faculty of Computer Science, Ruhr University Bochum, Germany
Asja Fischer
Professor for Machine Learning, Ruhr University Bochum
machine learning · deep learning · probabilistic models