How to Connect Speech Foundation Models and Large Language Models? What Matters and What Does Not

📅 2024-09-25
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the mechanisms that drive performance when speech foundation models (SFMs), adapters, and large language models (LLMs) are combined for speech-to-text tasks. To this end, the authors conduct systematic ablation experiments on automatic speech recognition (ASR) and speech-to-text translation, jointly evaluating five adapter architectures, two SFMs (Whisper and SeamlessM4T), and two LLMs (Mistral and Llama). The results establish empirically, for the first time, that the choice of SFM is the dominant factor in downstream performance, accounting for the largest gains. The adapter has a moderate effect that is highly contingent on the specific SFM-LLM pairing, challenging the prevailing assumption that the adapter alone drives optimization. Well-chosen configurations are also reported to improve cross-task generalization and robustness. Overall, the work provides reproducible design principles and an empirical benchmark for speech-language multimodal alignment.
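As a quick illustration of the experimental design, here is a minimal Python sketch of the resulting ablation grid. The adapter labels are placeholders (the five adapter architectures are not named here), so treat this as an assumption about the grid's shape, not the paper's exact setup.

```python
from itertools import product

# Hypothetical labels for the five adapter architectures evaluated in the paper.
sfms = ["Whisper", "SeamlessM4T"]
llms = ["Mistral", "Llama"]
adapters = [f"adapter_{i}" for i in range(1, 6)]
tasks = ["ASR", "speech-to-text translation"]

# 2 SFMs x 5 adapters x 2 LLMs = 20 model configurations, each run on 2 tasks.
configs = list(product(sfms, adapters, llms))
print(len(configs))               # 20
print(len(configs) * len(tasks))  # 40 task evaluations
```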

📝 Abstract
The remarkable performance achieved by Large Language Models (LLMs) has driven research efforts to leverage them for a wide range of tasks and input modalities. In speech-to-text (S2T) tasks, the emerging solution consists of projecting the output of the encoder of a Speech Foundation Model (SFM) into the LLM embedding space through an adapter module. However, no work has yet investigated how much the downstream-task performance depends on each component (SFM, adapter, LLM), nor whether the best design of the adapter depends on the chosen SFM and LLM. To fill this gap, we evaluate the combination of 5 adapter modules, 2 LLMs (Mistral and Llama), and 2 SFMs (Whisper and SeamlessM4T) on two widespread S2T tasks, namely Automatic Speech Recognition and Speech Translation. Our results demonstrate that the SFM plays a pivotal role in downstream performance, while the adapter choice has a moderate impact and depends on the SFM and LLM.
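To make the pipeline described in the abstract concrete, here is a minimal PyTorch sketch of the architecture: an SFM encoder produces speech states, an adapter projects them into the LLM embedding space, and the LLM consumes them alongside the text prompt. The length-compressing Conv1d design and all dimensions are illustrative assumptions, not the paper's specific adapters (Whisper-large encoder states are 1280-dim; Mistral-7B embeddings are 4096-dim).

```python
import torch
import torch.nn as nn

class SpeechAdapter(nn.Module):
    """Illustrative adapter: maps SFM encoder states into the LLM embedding space."""

    def __init__(self, sfm_dim: int = 1280, llm_dim: int = 4096, stride: int = 2):
        super().__init__()
        # Strided convolution shortens the (long) speech sequence before projection.
        self.downsample = nn.Conv1d(sfm_dim, sfm_dim, kernel_size=3,
                                    stride=stride, padding=1)
        self.proj = nn.Linear(sfm_dim, llm_dim)

    def forward(self, sfm_states: torch.Tensor) -> torch.Tensor:
        # sfm_states: (batch, time, sfm_dim) -> (batch, time // stride, llm_dim)
        x = self.downsample(sfm_states.transpose(1, 2)).transpose(1, 2)
        return self.proj(x)

# Dummy Whisper-like encoder output: 30 s of audio -> 1500 frames of size 1280.
adapter = SpeechAdapter()
speech_embeds = adapter(torch.randn(1, 1500, 1280))
print(speech_embeds.shape)  # torch.Size([1, 750, 4096])
# These embeddings would be concatenated with the text-prompt embeddings
# and fed to the (typically frozen or LoRA-tuned) LLM.
```

The downsampling step reflects a common design choice in S2T adapters: speech sequences are an order of magnitude longer than text, so compressing them keeps the LLM's context length manageable.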
Problem

Research questions and friction points this paper is trying to address.

Quantifying how much each component (SFM, adapter, LLM) contributes to downstream S2T performance
Determining whether the best adapter design depends on the chosen SFM and LLM
Assessing whether the SFM plays the pivotal role in downstream tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Connects an SFM and an LLM via an adapter module for S2T tasks
Evaluates 5 adapter modules combined with 2 LLMs and 2 SFMs on ASR and Speech Translation
Shows that the SFM is crucial, while the adapter's impact is moderate and depends on the SFM and LLM
👥 Authors
Francesco Verdini
La Sapienza, University of Rome, Italy; Translated, Italy; Pi School, Italy
Pierfrancesco Melucci
La Sapienza, University of Rome, Italy; Department of Engineering, Roma Tre University, Italy; Translated, Italy; Pi School, Italy
Stefano Perna
Department of Engineering, Roma Tre University, Italy; Fondazione Bruno Kessler, Italy
Francesco Cariaggi
Translated, Italy; Pi School, Italy
Marco Gaido
Fondazione Bruno Kessler, Italy
artificial intelligence, NLP, speech translation
Sara Papi
Researcher at FBK
Speech Processing, Speech Translation, Multimodal LLM
Szymon Mazurek
AGH University of Krakow, ACC Cyfronet AGH
Deep learning, Spiking neural networks, Edge AI, HPC, Parallel Computing
Marek Kasztelnik
ACK Cyfronet AGH
eScience, distributed computing
L. Bentivogli
Fondazione Bruno Kessler, Italy
Sébastien Bratières
Translated, Italy; Pi School, Italy
P. Merialdo
Department of Engineering, Roma Tre University, Italy
Simone Scardapane
Associate Professor, Sapienza University
Machine Learning, Signal Processing