Objective Soups: Multilingual Multi-Task Modeling for Speech Processing

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multilingual, multi-task speech processing (MSP), objectives such as automatic speech recognition (ASR) and speech translation (ST) exhibit strong gradient conflicts, hindering convergence of conventional flat multi-objective optimization (MOO) and causing substantial performance degradation as the task count increases. To address this, we propose a hierarchical MOO framework, "Objective Soups," which identifies conflict-prone layers via gradient conflict analysis and introduces a lightweight layer-selection mechanism that applies conflict-avoiding gradient updates only at those critical layers. This decouples ASR and ST into a two-tier architecture, enabling inter-task gradient coordination. Experiments on CoVoST v2, LibriSpeech, and AISHELL-1 demonstrate that our method significantly outperforms flat MOO: in joint multilingual ASR and ST training, it improves BLEU by 1.8 points and reduces WER by 0.9%, while cutting training overhead by 37%. The approach thus delivers both superior accuracy and scalability.
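The two-tier decoupling described above can be illustrated with a toy alternating scheme, where a lower tier takes an ASR step and an upper tier then takes an ST step on the shared parameters. This is a generic sketch of hierarchical (bi-level-style) optimization, not the paper's released algorithm; the quadratic objectives and learning rate are invented for the example.

```python
import numpy as np

def bilevel_step(theta, grad_asr, grad_st, lr=0.1):
    """One step of a simple two-tier recipe: the lower tier first
    descends the ASR objective, then the upper tier applies the ST
    gradient at the updated parameters (alternating sketch only)."""
    theta = theta - lr * grad_asr(theta)  # lower tier: ASR update
    theta = theta - lr * grad_st(theta)   # upper tier: ST update
    return theta

# Toy quadratic objectives with different minima (invented for illustration)
grad_asr = lambda t: 2 * (t - 1.0)  # gradient of ASR loss (t - 1)^2
grad_st = lambda t: 2 * (t + 1.0)   # gradient of ST loss (t + 1)^2

theta = np.array([0.0])
for _ in range(100):
    theta = bilevel_step(theta, grad_asr, grad_st)
# The alternating scheme settles at a compromise between the two minima.
```

Because the ST step always sees parameters already moved by the ASR step, the fixed point differs from what joint (flat) averaging of the two gradients would produce, which is the basic intuition behind ordering the tiers.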

📝 Abstract
Training a single model for multilingual, multi-task speech processing (MSP) is severely hampered by conflicting objectives between tasks like speech recognition and translation. While multi-objective optimization (MOO) aims to align gradient updates, its effectiveness diminishes as the number of tasks grows, making it difficult to find a common descent direction. This raises a fundamental question: should highly conflicting objectives be optimized jointly or separated into a hierarchical structure? To address this question, this paper investigates three multi-objective MSP formulations, which we refer to as objective soup recipes. These formulations apply multi-objective optimization at different optimization levels to mitigate potential conflicts among all objectives. To ensure efficiency, we introduce a lightweight layer-selection mechanism that computes the conflict-avoiding gradient using only the most problematic layers, minimizing computational and memory overhead. Extensive experiments on CoVoST v2, LibriSpeech, and AISHELL-1 reveal that a bi-level recipe separating recognition and translation tasks consistently outperforms standard flat optimization. Our work demonstrates that hierarchical MOO is a more effective and scalable approach for building state-of-the-art MSP models. Our code has been released at https://github.com/afmsaif/Objective_Soups.
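The "conflict-avoiding gradient" the abstract mentions can be sketched with a standard PCGrad-style projection: when two task gradients point in opposing directions (negative dot product), the conflicting component is projected out before averaging. This is a generic illustration of conflict-avoiding updates, not necessarily the exact MOO solver used in the paper.

```python
import numpy as np

def conflict_avoiding_update(grads):
    """PCGrad-style projection: for each task gradient, remove the
    component that conflicts (negative dot product) with every other
    task's gradient, then average the adjusted gradients."""
    adjusted = []
    for i, g in enumerate(grads):
        g = g.copy()
        for j, h in enumerate(grads):
            if i == j:
                continue
            dot = g @ h
            if dot < 0:  # gradients conflict: project out the clash
                g = g - (dot / (h @ h)) * h
        adjusted.append(g)
    return np.mean(adjusted, axis=0)

# Two conflicting toy task gradients (e.g., ASR vs. ST on a shared layer)
g_asr = np.array([1.0, 0.0])
g_st = np.array([-1.0, 1.0])
update = conflict_avoiding_update([g_asr, g_st])  # no longer opposes either task
```

After projection, each adjusted gradient is orthogonal to the task it conflicted with, so the averaged update cannot directly undo either task's progress, which is the failure mode flat MOO runs into as tasks accumulate.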
Problem

Research questions and friction points this paper is trying to address.

Address conflicting objectives in multilingual multi-task speech processing
Investigate hierarchical vs flat multi-objective optimization approaches
Propose lightweight layer-selection to mitigate gradient conflicts efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical multi-objective optimization for MSP
Lightweight layer-selection for conflict-avoiding gradients
Bi-level recipe separates recognition and translation
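The layer-selection idea above, ranking layers by how strongly the per-layer task gradients conflict and treating only the worst offenders, can be sketched as follows. The cosine-similarity ranking and the toy gradients are assumptions for illustration; the paper's exact selection criterion may differ.

```python
import numpy as np

def select_conflict_layers(task_grads, k=1):
    """Rank layers by gradient conflict between two tasks (cosine
    similarity of their per-layer gradients) and return the k most
    conflict-prone layer indices. `task_grads` is a pair of lists of
    per-layer gradient vectors, one list per task."""
    g_a, g_b = task_grads
    cosines = []
    for la, lb in zip(g_a, g_b):
        cos = (la @ lb) / (np.linalg.norm(la) * np.linalg.norm(lb) + 1e-12)
        cosines.append(cos)
    # Most negative cosine = strongest conflict; take the k worst layers.
    return sorted(np.argsort(cosines)[:k].tolist())

# Toy per-layer gradients: layer 1 conflicts, layers 0 and 2 agree.
asr = [np.array([1.0, 1.0]), np.array([1.0, 0.0]), np.array([0.5, 0.5])]
st = [np.array([1.0, 0.9]), np.array([-1.0, 0.1]), np.array([0.6, 0.4])]
worst = select_conflict_layers([asr, st], k=1)  # → [1]
```

Applying the (more expensive) conflict-avoiding update only at these selected layers, and plain averaged gradients elsewhere, is what keeps the compute and memory overhead low.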