🤖 AI Summary
To address degraded automatic speech recognition (ASR) performance on Catalan–Spanish code-switching (CS), caused by the scarcity of authentic CS data and the high linguistic similarity between the two languages, this paper proposes a lightweight fine-tuning strategy that combines synthetic CS data with explicit language tagging. The method has three steps: (1) high-fidelity CS speech is synthesized and concatenated with monolingual utterances; (2) explicit language identifiers are injected into Whisper's input sequence; and (3) the model is jointly fine-tuned on this augmented dataset together with a small amount of authentic CS speech. Experiments show substantial improvements over monolingual and mixed-language baselines, including an 18.7% relative WER reduction on the CS test set. Key contributions: (i) CAT-ES-Whisper, the first open-source Whisper model fine-tuned specifically for Catalan–Spanish CS ASR; (ii) empirical validation of the synergistic benefit of language tagging and controllable synthetic data; and (iii) a reproducible framework for low-resource multilingual CS ASR.
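Concretely, Whisper conditions its decoder on a short prefix of special tokens, and "injecting a language identifier" amounts to fixing the language token in that prefix. A minimal sketch of how that prompt is assembled (the token names follow Whisper's published convention; which language token to force, e.g. the dominant language of the utterance, is the knob the paper tunes):

```python
def decoder_prompt(language: str, timestamps: bool = False) -> str:
    """Build Whisper's decoder prefix as a string of special tokens.

    The language token (e.g. <|ca|> for Catalan, <|es|> for Spanish)
    is where explicit language tagging plugs in.
    """
    toks = ["<|startoftranscript|>", f"<|{language}|>", "<|transcribe|>"]
    if not timestamps:
        # Whisper suppresses timestamp tokens when this marker is present.
        toks.append("<|notimestamps|>")
    return "".join(toks)

# Tag an utterance whose dominant language is Catalan:
decoder_prompt("ca")  # "<|startoftranscript|><|ca|><|transcribe|><|notimestamps|>"
```

In practice this prefix is supplied as token IDs (for example via a processor's forced decoder IDs in a typical Hugging Face fine-tuning setup) rather than as a raw string; the string form above only makes the structure visible.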
📝 Abstract
Code-switching (CS), the alternating use of two or more languages, challenges automatic speech recognition (ASR) due to scarce training data and similarities between the languages involved. The lack of dedicated CS datasets limits ASR performance, as most models rely on monolingual or mixed-language corpora that fail to reflect real-world CS patterns. This issue is critical in multilingual societies where CS occurs in both informal and formal settings. A key example is Catalan-Spanish CS, widely used in media and parliamentary speeches. In this work, we improve ASR for Catalan-Spanish CS by exploring three strategies: (1) generating synthetic CS data, (2) concatenating monolingual audio, and (3) leveraging real CS data with language tokens. We extract CS data from Catalan speech corpora and fine-tune OpenAI's Whisper models, making them available on Hugging Face. Results show that combining a modest amount of synthetic CS data with the dominant language token yields the best transcription performance.
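Strategies (2) and (3) can be combined by pairing monolingual utterances into pseudo-code-switched training examples whose transcripts are language-tagged. A minimal sketch of assembling one such pair, assuming 16 kHz audio as NumPy arrays; the helper name, the inserted silence, and the exact tag format are illustrative, not the paper's released recipe:

```python
import numpy as np

# Whisper-style language tokens; using them inline in the transcript
# is an assumption about the tagging format, not taken from the paper.
CA_TAG, ES_TAG = "<|ca|>", "<|es|>"

def make_concat_example(ca_audio, ca_text, es_audio, es_text,
                        sr=16000, gap_s=0.1):
    """Concatenate a Catalan and a Spanish utterance into one
    pseudo-code-switched example: a short silence joins the audio,
    and each transcript segment is prefixed with its language tag."""
    gap = np.zeros(int(sr * gap_s), dtype=np.float32)
    audio = np.concatenate([ca_audio, gap, es_audio])
    text = f"{CA_TAG} {ca_text.strip()} {ES_TAG} {es_text.strip()}"
    return audio, text

# Toy 1 s and 0.5 s signals standing in for real recordings.
ca = np.random.randn(16000).astype(np.float32)
es = np.random.randn(8000).astype(np.float32)
audio, text = make_concat_example(ca, "bon dia", es, "buenos días")
```

Real fine-tuning data would of course use actual recordings and alternate the language order so neither language is always utterance-initial.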