🤖 AI Summary
In multilingual, multi-task speech processing (MSP), objectives such as automatic speech recognition (ASR) and speech translation (ST) exhibit strong gradient conflicts, which hinder the convergence of conventional flat multi-objective optimization (MOO) and cause substantial performance degradation as the task count grows. To address this, the authors propose a hierarchical MOO framework, "Objective Soups", that identifies conflict-prone layers via gradient-conflict analysis and introduces a lightweight layer-selection mechanism applying conflict-avoiding gradient updates only at those critical layers. Decoupling ASR and ST into a two-tier (bi-level) structure enables inter-task gradient coordination. Experiments on CoVoST v2, LibriSpeech, and AISHELL-1 show that the method significantly outperforms flat MOO: in joint multilingual ASR and ST training it improves BLEU by 1.8 points and lowers WER by 0.9%, while cutting training overhead by 37%. The approach thus delivers both higher accuracy and better scalability.
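The conflict-avoiding update mentioned above can be illustrated with a minimal sketch. This is not the paper's exact algorithm; it shows the widely used PCGrad-style projection, in which two task gradients with a negative inner product are each projected onto the normal plane of the other before averaging. The function name and the simple averaging step are assumptions for illustration.

```python
import numpy as np

def conflict_avoiding_update(grad_asr: np.ndarray, grad_st: np.ndarray) -> np.ndarray:
    """Combine two task gradients while avoiding conflict.

    Illustrative PCGrad-style sketch (not the paper's exact method):
    if the gradients conflict (negative dot product), project each
    onto the normal plane of the other, then average.
    """
    g1, g2 = grad_asr.astype(float).copy(), grad_st.astype(float).copy()
    dot = np.dot(grad_asr, grad_st)
    if dot < 0:  # gradients point in conflicting directions
        g1 = g1 - dot / np.dot(grad_st, grad_st) * grad_st
        g2 = g2 - dot / np.dot(grad_asr, grad_asr) * grad_asr
    return 0.5 * (g1 + g2)
```

With conflicting inputs such as `[1, 0]` and `[-1, 1]`, the combined direction has a non-negative inner product with both original gradients, so neither task's loss is increased to first order.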
📝 Abstract
Training a single model for multilingual, multi-task speech processing (MSP) is severely hampered by conflicting objectives between tasks like speech recognition and translation. While multi-objective optimization (MOO) aims to align gradient updates, its effectiveness diminishes as the number of tasks grows, making it difficult to find a common descent direction. This raises a fundamental question: should highly conflicting objectives be optimized jointly or separated into a hierarchical structure? To address this question, this paper investigates three multi-objective MSP formulations, which we refer to as **objective soup recipes**. These formulations apply multi-objective optimization at different optimization levels to mitigate potential conflicts among all objectives. To ensure efficiency, we introduce a lightweight layer-selection mechanism that computes the conflict-avoiding gradient using only the most problematic layers, minimizing computational and memory overhead. Extensive experiments on CoVoST v2, LibriSpeech, and AISHELL-1 reveal that a bi-level recipe separating recognition and translation tasks consistently outperforms standard flat optimization. Our work demonstrates that hierarchical MOO is a more effective and scalable approach for building state-of-the-art MSP models. Our code has been released at https://github.com/afmsaif/Objective_Soups.
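The layer-selection idea in the abstract, restricting the expensive conflict-avoiding computation to the most problematic layers, can be sketched as ranking layers by the cosine similarity of the two tasks' per-layer gradients. The function name, the top-k criterion, and the cosine metric are assumptions for illustration, not necessarily the paper's exact selection rule.

```python
import numpy as np

def select_conflict_layers(layer_grads_a, layer_grads_b, k=1):
    """Return indices of the k most conflicting layers.

    Illustrative sketch (assumed criterion): compute the cosine
    similarity between the two tasks' gradients at each layer and
    pick the k layers with the most negative cosine, i.e. the
    strongest gradient conflict.
    """
    cosines = []
    for ga, gb in zip(layer_grads_a, layer_grads_b):
        ga, gb = ga.ravel(), gb.ravel()
        denom = np.linalg.norm(ga) * np.linalg.norm(gb) + 1e-12
        cosines.append(np.dot(ga, gb) / denom)
    # Lowest cosine = strongest conflict; only these layers would
    # receive the conflict-avoiding update, the rest a plain average.
    return sorted(np.argsort(cosines)[:k].tolist())
```

Applying the conflict-avoiding gradient only at the selected layers keeps the per-step cost close to ordinary joint training, since the projection and the extra per-task gradient bookkeeping are skipped everywhere else.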