MERaLiON-AudioLLM: Bridging Audio and Language with Large Language Models

πŸ“… 2024-12-13
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the limited robustness of existing speech-language models in Singapore's multilingual, multi-dialect, and multi-accent environment, this work introduces the first end-to-end speech-text joint large language model (LLM) tailored for localized deployment. The model integrates a speech encoder, a multilingual tokenizer, and a cross-modal alignment module within a unified LLM architecture, enabling joint speech-text modeling with empathetic reasoning and cross-lingual semantic alignment. Evaluated on Singapore English (Singlish), Mandarin dialects, and code-mixed speech recognition and semantic understanding tasks, it significantly outperforms existing baselines. The framework improves accessibility and practical utility in multilingual settings and establishes a reusable methodology and benchmark for regionally adapted multimodal LLMs.
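The summary above names three components: a speech encoder, a multilingual tokenizer, and a cross-modal alignment module feeding a unified LLM. A minimal sketch of that fusion pattern is shown below. This is an illustration only, not the paper's implementation: the dimensions, function names (`encode_audio`, `project_to_llm`, `fuse`), and the use of simple linear maps are all assumptions for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

D_AUDIO = 80   # e.g. log-mel feature dimension (assumption)
D_LLM = 256    # LLM embedding dimension (assumption)

def encode_audio(frames: np.ndarray) -> np.ndarray:
    """Stand-in speech encoder: a fixed linear map over frame features."""
    W = rng.standard_normal((D_AUDIO, D_AUDIO)) * 0.01
    return frames @ W

def project_to_llm(audio_feats: np.ndarray) -> np.ndarray:
    """Cross-modal alignment module: project audio features into LLM space."""
    W = rng.standard_normal((D_AUDIO, D_LLM)) * 0.01
    return audio_feats @ W

def fuse(audio_emb: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    """Prepend projected audio embeddings to the text-token embeddings,
    forming one sequence the LLM attends over jointly."""
    return np.concatenate([audio_emb, text_emb], axis=0)

audio = rng.standard_normal((100, D_AUDIO))  # 100 audio frames
text = rng.standard_normal((12, D_LLM))      # 12 text-token embeddings

fused = fuse(project_to_llm(encode_audio(audio)), text)
print(fused.shape)  # (112, 256)
```

The key design point this sketch captures is that audio is mapped into the same embedding space as text, so a single decoder can model both modalities jointly rather than pipelining ASR into a separate text-only LLM.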

πŸ“ Abstract
We introduce MERaLiON-AudioLLM (Multimodal Empathetic Reasoning and Learning in One Network), the first speech-text model tailored for Singapore's multilingual and multicultural landscape. Developed under the National Large Language Models Funding Initiative, Singapore, MERaLiON-AudioLLM integrates advanced speech and text processing to address the diverse linguistic nuances of local accents and dialects, enhancing accessibility and usability in complex, multilingual environments. Our results demonstrate improvements in both speech recognition and task-specific understanding, positioning MERaLiON-AudioLLM as a pioneering solution for region-specific AI applications. We envision this release to set a precedent for future models designed to address localised linguistic and cultural contexts in a global framework.
Problem

Research questions and friction points this paper is trying to address.

Multilingual Processing
Dialect Variation
Accent Handling
Innovation

Methods, ideas, or system contributions that make the work stand out.

MERaLiON-AudioLLM
multilingual-multicultural-integration
speech-text-unification
Yingxu He
Institute for Infocomm Research (I2R), A*STAR, Singapore
Zhuohan Liu
Research Engineer
Shuo Sun
Johns Hopkins University
Bin Wang
Institute for Infocomm Research (I2R), A*STAR, Singapore
Wenyu Zhang
Institute for Infocomm Research (I2R), A*STAR, Singapore
Xunlong Zou
Institute for Infocomm Research (I2R), A*STAR, Singapore
Nancy F. Chen
ISCA Fellow, AAIA Fellow, Multimodal Generative AI Group Leader, AI for Education Head at A*STAR
Agentic AI · Large Language Models · Conversational AI
AiTi Aw