🤖 AI Summary
In simultaneous speech translation (SimulST), multi-task learning induces gradient conflicts between the primary and auxiliary tasks that degrade translation quality, and existing model-level conflict resolution methods carry heavy GPU memory overhead. To address this, we propose a module-level gradient conflict detection and projection mitigation mechanism. Unlike conventional model-level approaches, our method identifies conflicts and applies orthogonal gradient projection at the finer granularity of individual modules, significantly improving optimization efficiency. Integrated into a modular network architecture, it achieves a 0.68 BLEU gain on the offline task, with consistent improvements under medium-to-high latency settings, while reducing GPU memory consumption by over 95% relative to other conflict mitigation methods, thus balancing translation quality and real-time performance. To the best of our knowledge, this is the first work to refine gradient conflict mitigation to the module level, establishing a new paradigm for efficient, low-resource SimulST.
📝 Abstract
Simultaneous Speech Translation (SimulST) involves generating target-language text while continuously processing streaming speech input, presenting significant real-time challenges. Multi-task learning is often employed to enhance SimulST performance but introduces optimization conflicts between the primary and auxiliary tasks, potentially compromising overall efficiency. Existing model-level conflict resolution methods are ill-suited to this task, exacerbating these inefficiencies and incurring high GPU memory consumption. To address these challenges, we propose a Modular Gradient Conflict Mitigation (MGCM) strategy that detects conflicts at a finer-grained, modular level and resolves them via gradient projection. Experimental results demonstrate that MGCM significantly improves SimulST performance, particularly under medium- and high-latency conditions, and achieves a 0.68 BLEU score gain in offline tasks. Additionally, MGCM reduces GPU memory consumption by over 95% compared to other conflict mitigation methods, establishing it as a robust solution for SimulST tasks.