🤖 AI Summary
This work addresses a critical gap in the evaluation of adversarial robustness for automatic speech recognition (ASR) systems, which has predominantly focused on accuracy degradation while overlooking the deterioration of inference efficiency. To this end, we propose MORE, the first multi-objective adversarial attack framework that simultaneously induces high word error rates and generates excessively long, redundant transcriptions through a single perturbation. MORE integrates a hierarchical staged repulsion-anchoring mechanism with a repetitive encouragement doubling objective (REDO), coupled with a periodic sequence-length amplification scheme that substantially increases computational overhead. Experimental results demonstrate that MORE consistently outperforms existing baselines, not only maintaining high recognition error but also significantly elongating output transcripts, thereby exposing a previously underappreciated vulnerability of ASR systems along the efficiency dimension.
📝 Abstract
The emergence of large-scale automatic speech recognition (ASR) models such as Whisper has greatly expanded their adoption across diverse real-world applications. Ensuring robustness against even minor input perturbations is therefore critical for maintaining reliable performance in real-time environments. While prior work has mainly examined accuracy degradation under adversarial attacks, robustness with respect to efficiency remains largely unexplored. This narrow focus provides only a partial understanding of ASR model vulnerabilities. To address this gap, we conduct a comprehensive study of ASR robustness under multiple attack scenarios. We introduce MORE, a multi-objective repetitive doubling encouragement attack, which jointly degrades recognition accuracy and inference efficiency through a hierarchical staged repulsion-anchoring mechanism. Specifically, we reformulate multi-objective adversarial optimization into a hierarchical framework that achieves the dual objectives sequentially. To further amplify effectiveness, we propose a novel repetitive encouragement doubling objective (REDO) that induces duplicative text generation by maintaining accuracy degradation while periodically doubling the predicted sequence length. Overall, MORE compels ASR models to produce incorrect transcriptions at a substantially higher computational cost, triggered by a single adversarial input. Experiments show that, compared to existing baselines, MORE consistently yields significantly longer transcriptions while maintaining high word error rates, underscoring its effectiveness as a multi-objective adversarial attack.
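To make the hierarchical staging and periodic length doubling concrete, here is a minimal, hypothetical sketch of how such a staged objective could be scored. All names (`redo_target_length`, `more_objective`), the error-loss threshold, and the weighting `lam` are illustrative assumptions, not the paper's actual formulation; a real attack would plug model-derived error and length losses into a gradient-based perturbation loop.

```python
def redo_target_length(base_len, step, period):
    """Illustrative REDO schedule (assumption): the target transcript
    length doubles every `period` optimization steps."""
    return base_len * (2 ** (step // period))


def more_objective(err_loss, pred_len, base_len, step, period,
                   err_floor=1.0, lam=0.1):
    """Hierarchical staged objective (sketch, to be minimized).

    Stage 1: until the recognition-error loss clears `err_floor`,
             optimize accuracy degradation alone.
    Stage 2: keep the error high while rewarding progress toward a
             periodically doubled target length.
    """
    if err_loss < err_floor:            # stage 1: accuracy degradation only
        return -err_loss
    target = redo_target_length(base_len, step, period)
    len_gap = max(target - pred_len, 0)  # remaining length shortfall
    return -err_loss + lam * len_gap     # stage 2: add length amplification
```

In this sketch the length term only activates once the error objective is satisfied, which mirrors the abstract's description of achieving the dual objectives sequentially rather than weighting them jointly from the start.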