🤖 AI Summary
This work addresses the challenges of catastrophic forgetting and high computational cost in multilingual self-supervised learning when continuously acquiring new languages. To this end, the authors propose MiLorE-SSL, a framework that integrates lightweight low-rank adaptation (LoRA) with a soft mixture-of-experts (soft MoE) mechanism, complemented by a limited replay strategy. Operating with only 2.14% trainable parameters, MiLorE-SSL effectively mitigates cross-lingual interference and forgetting. Evaluated on the ML-SUPERB benchmark, the method not only significantly improves performance on newly added languages but also enhances representations for previously learned ones, demonstrating its efficacy and scalability for continual multilingual speech representation learning.
📝 Abstract
Self-supervised learning (SSL) has greatly advanced speech representation learning, but multilingual SSL models remain constrained to languages encountered during pretraining. Retraining from scratch to incorporate new languages is computationally expensive, while sequential training without mitigation strategies often leads to catastrophic forgetting. To address this, we propose MiLorE-SSL, a lightweight framework that combines LoRA modules with a soft mixture-of-experts (MoE) mechanism for efficient continual multilingual training. LoRA provides efficient low-rank adaptation, while soft MoE promotes flexible expert sharing across languages, reducing cross-lingual interference. To further mitigate forgetting, we introduce limited replay data from existing languages, avoiding reliance on large historical corpora. Experiments on ML-SUPERB demonstrate that MiLorE-SSL achieves strong performance on new languages and improves performance on existing ones with only 2.14% trainable parameters.
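To make the LoRA-plus-soft-MoE idea concrete, here is a minimal NumPy sketch of the general pattern: a frozen linear layer augmented by several low-rank (LoRA) experts whose outputs are blended by soft routing weights. All class and variable names here are hypothetical illustrations of the technique, not the paper's actual implementation, and details such as the gating function and scaling follow common LoRA/MoE conventions rather than the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class SoftMoELoRA:
    """Hypothetical sketch: a frozen linear layer plus a soft mixture of
    LoRA experts. Only the LoRA factors and the gate are trainable."""

    def __init__(self, d_in, d_out, n_experts=4, rank=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight (stand-in for the SSL backbone layer).
        self.W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)
        # Per-expert LoRA factors: A random-init, B zero-init, so the
        # adapted layer starts out identical to the frozen one.
        self.A = rng.standard_normal((n_experts, d_in, rank)) * 0.01
        self.B = np.zeros((n_experts, rank, d_out))
        # Soft-routing gate: every expert gets a nonzero weight per input.
        self.gate = rng.standard_normal((d_in, n_experts)) * 0.01
        self.scale = alpha / rank  # standard LoRA scaling

    def __call__(self, x):
        base = x @ self.W                    # frozen path
        g = softmax(x @ self.gate)           # (batch, n_experts) soft weights
        # Gate-weighted sum of each expert's low-rank update x @ A_e @ B_e.
        delta = np.einsum('be,bd,edr,ero->bo', g, x, self.A, self.B)
        return base + self.scale * delta
```

Because every input receives a soft blend of all experts (rather than a hard top-k selection), experts can be shared across languages, which is the property the abstract credits with reducing cross-lingual interference.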