🤖 AI Summary
This work addresses the limited robustness of current automatic speech recognition (ASR) systems under distribution shifts, such as background noise or speaker accents, and the susceptibility of test-time adaptation methods to confirmation bias caused by high-confidence erroneous predictions. To mitigate these issues, the authors propose ASR-TRA, which they present as the first framework to introduce causal intervention into test-time adaptation. ASR-TRA jointly optimizes the ASR model and a learnable decoder prompt during inference via reinforcement learning. It generates diverse transcription candidates through temperature-controlled stochastic decoding and leverages a reward signal based on audio-text semantic alignment to guide adaptation. This approach alleviates confirmation bias, enhances adaptation stability and interpretability, and achieves significant performance gains over existing methods on noisy LibriSpeech and L2 Arctic accented-speech benchmarks, all while maintaining low latency.
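The summary's audio-text semantic-alignment reward can be illustrated with a minimal sketch. The paper's actual reward model is not described here, so this assumes a hypothetical cross-modal encoder that embeds the audio clip and a candidate transcription into the same vector space, and scores agreement with cosine similarity mapped into [0, 1]:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def alignment_reward(audio_emb, text_emb):
    """Map cosine similarity in [-1, 1] to a reward in [0, 1].

    `audio_emb` / `text_emb` are assumed outputs of some cross-modal
    encoder (hypothetical here); higher reward = better alignment.
    """
    return 0.5 * (1.0 + cosine_similarity(audio_emb, text_emb))
```

A perfectly aligned pair scores 1.0 and an opposed pair scores 0.0, so the reward is a bounded, differentiable-in-principle signal that a reinforcement-learning update can maximize.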
📝 Abstract
Recently, Automatic Speech Recognition (ASR) systems (e.g., Whisper) have achieved remarkable accuracy improvements but remain highly sensitive to unseen real-world data with large distribution shifts, including noisy environments and diverse accents. To address this issue, test-time adaptation (TTA) has shown great potential for improving model adaptability at inference time without ground-truth labels; existing TTA methods often rely on pseudo-labeling or entropy minimization. However, by treating model confidence as a learning signal, these methods may reinforce high-confidence errors, leading to confirmation bias that undermines adaptation. To overcome these limitations, we present ASR-TRA, a novel Test-time Reinforcement Adaptation framework inspired by causal intervention. More precisely, our method introduces a learnable decoder prompt and utilizes temperature-controlled stochastic decoding to generate diverse transcription candidates. These candidates are scored by a reward model that measures audio-text semantic alignment, and the resulting feedback is used to update both model and prompt parameters via reinforcement learning. Comprehensive experiments on LibriSpeech with synthetic noise and on L2 Arctic accented English demonstrate that our method achieves higher accuracy while maintaining lower latency than existing TTA baselines. Ablation studies further confirm the effectiveness of combining audio- and language-based rewards, highlighting our method's enhanced stability and interpretability. Overall, our approach provides a practical and robust solution for deploying ASR systems in challenging real-world conditions.