AI Summary
To address the limited robustness of speech-text models in Singapore's multilingual, multi-dialect, and multi-accent environment, this work introduces the first end-to-end joint speech-language large language model (LLM) tailored for localized deployment. The model integrates an adaptive speech encoder, a multilingual tokenizer, and a cross-modal alignment module within a unified LLM architecture, enabling joint speech-text modeling with empathetic reasoning and cross-lingual semantic alignment. Evaluated on Singapore English (Singlish), Mandarin dialects, and code-mixed speech recognition and semantic understanding tasks, the model significantly outperforms existing baselines. The proposed framework enhances accessibility and practical utility in multilingual settings and establishes a reusable methodology and benchmark for regionally adapted multimodal LLMs.
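To make the architecture description concrete, below is a minimal sketch of adapter-based speech-text fusion, the common pattern in which a cross-modal alignment module projects speech encoder features into the LLM's embedding space. The class name, dimensions, and projection design here are illustrative assumptions for exposition, not MERaLiON-AudioLLM's actual implementation.

```python
import torch
import torch.nn as nn

class SpeechTextFusion(nn.Module):
    """Illustrative sketch of adapter-based speech-text fusion.

    A speech encoder produces frame-level features; a small projection
    (playing the role of the cross-modal alignment module) maps them into
    the LLM's token-embedding space, where they are concatenated with the
    text embeddings so the LLM decodes over both modalities jointly.
    """

    def __init__(self, speech_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Cross-modal alignment: project speech features to the LLM width.
        self.adapter = nn.Sequential(
            nn.Linear(speech_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, speech_feats: torch.Tensor,
                text_embeds: torch.Tensor) -> torch.Tensor:
        # speech_feats: (batch, frames, speech_dim) from the speech encoder
        # text_embeds:  (batch, tokens, llm_dim) from the LLM's embedding layer
        aligned = self.adapter(speech_feats)  # (batch, frames, llm_dim)
        # Prefix the aligned speech frames to the text sequence so the LLM
        # attends over speech and text in a single context window.
        return torch.cat([aligned, text_embeds], dim=1)


# Usage: fuse 50 speech frames with a 10-token text prompt.
fusion = SpeechTextFusion()
speech = torch.randn(2, 50, 1024)
text = torch.randn(2, 10, 4096)
joint = fusion(speech, text)
print(joint.shape)  # torch.Size([2, 60, 4096])
```

Under this pattern, only the lightweight adapter needs to learn the mapping between modalities, which is one reason such designs adapt well to region-specific speech data.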
Abstract
We introduce MERaLiON-AudioLLM (Multimodal Empathetic Reasoning and Learning in One Network), the first speech-text model tailored for Singapore's multilingual and multicultural landscape. Developed under the National Large Language Models Funding Initiative, Singapore, MERaLiON-AudioLLM integrates advanced speech and text processing to address the diverse linguistic nuances of local accents and dialects, enhancing accessibility and usability in complex, multilingual environments. Our results demonstrate improvements in both speech recognition and task-specific understanding, positioning MERaLiON-AudioLLM as a pioneering solution for region-specific AI applications. We envision this release setting a precedent for future models designed to address localised linguistic and cultural contexts in a global framework.